8-bit floating-point formats for deep learning (M/F)
Commissariat à l’Énergie Atomique
Organisation: The French Alternative Energies and Atomic Energy Commission (CEA) is a key player in research, development and innovation in four main areas: defence and security, low-carbon energies (nuclear and renewable), technological research for industry, and fundamental research.
Drawing on its widely acknowledged expertise, and thanks to its 16,000 technicians, engineers, researchers and support staff, the CEA actively participates in collaborative projects with a large number of academic and industrial partners. The CEA is established in ten centers spread throughout France.
Reference: 2024-32990
Unit description: The LSTA laboratory (Advanced Technologies and Systems-on-chip Laboratory) works on the development of innovative chips for various application domains: Artificial Intelligence, High-Performance Computing (HPC) and Quantum Computing.
In this lab, the AI team works on designing chips that implement AI algorithms efficiently and, conversely, on designing AI algorithms suited to specific hardware.
Position description
Category: Mathematics, information, scientific, software
Contract: Internship
Job title: 8-bit floating-point formats for deep learning (M/F)
Subject: The general goal of the proposed internship is to implement complete training of neural networks on diverse tasks using fp8 formats, and to compare the results with 32-bit floating point (fp32), 16-bit floating point (fp16) and 8-bit fixed point. If time allows, the internship may also encompass a C++ implementation, energy measurements, cache hit/miss measurements, and/or the implementation of other, more unusual numerical formats.
Contract duration (months): 6
Job description: By default, computations in a deep neural network are done with numbers represented in the 32-bit floating-point format (fp32). This format can represent a great variety of real-valued numbers, but it requires 4 bytes to store each number, which can be a problem in memory-constrained environments such as embedded systems. 8-bit fixed point (int8) is a common format for deep neural network inference [REF], as it enables strong compression with little loss in accuracy [REF]. Training a neural network in reduced precision, however, is much less common. During training, 8-bit fixed point suffers from its relatively small dynamic range [REF], which incurs a significant degradation in accuracy [REF]. To correct this flaw, some authors [REF] have proposed carrying out all computations in the learning phase in an 8-bit floating-point format (fp8). They claim that this yields networks whose performance on various tasks (language modelling, image classification) matches that of networks trained in full precision. Yet, despite these promises, no library is publicly available to perform deep learning in 8 bits.
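For illustration (this sketch is not part of the official posting), a common way to experiment with fp8 training in Python is to simulate the format: tensors stay stored in fp32, but every value is rounded to the nearest point of the fp8 grid (quantize-dequantize). The sketch below assumes the widely used E4M3 variant (1 sign bit, 4 exponent bits, 3 mantissa bits; largest finite value 448, smallest normal 2^-6), ignores NaN encodings, and uses a function name of our own choosing.

    import numpy as np

    def quantize_fp8_e4m3(x: np.ndarray) -> np.ndarray:
        """Round an fp32 array to the nearest value representable in a
        simplified fp8 E4M3 format (1 sign, 4 exponent, 3 mantissa bits).
        Simulation only: values stay stored as fp32, their precision is
        merely reduced (quantize-dequantize)."""
        mantissa_bits = 3
        max_normal = 448.0       # largest finite E4M3 value
        min_normal = 2.0 ** -6   # smallest normal E4M3 value

        sign = np.sign(x)
        mag = np.abs(x)

        # Clamp to the representable range (saturating cast).
        mag = np.minimum(mag, max_normal)

        # Exponent of each value, floored at the subnormal boundary.
        exp = np.floor(np.log2(np.maximum(mag, min_normal)))

        # Round the mantissa to 3 bits at that exponent.
        scale = 2.0 ** (exp - mantissa_bits)
        mag_q = np.round(mag / scale) * scale

        return (sign * mag_q).astype(np.float32)

    # fp8 keeps the order of magnitude but only ~3 significant bits:
    x = np.array([0.1234, 1.5, 100.0, 1000.0], dtype=np.float32)
    print(quantize_fp8_e4m3(x))   # -> [0.125, 1.5, 96.0, 448.0]

As a rough sense of the dynamic-range argument in the job description: an int8 fixed-point grid with a single scale spans only a 127:1 ratio between its largest and smallest nonzero magnitudes, whereas this E4M3 grid spans about 448 / 2^-9, roughly 229,000:1.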
What comes with the offer:
Methods / Means: Linux / Slurm / Python / C++ (optional)
Applicant profile: The ideal candidate should:
In line with CEA's commitment to integrating people with disabilities, this job is open to all.
Position location
Site: Grenoble
Job location: France, Auvergne-Rhône-Alpes, Isère (38)
Location: Grenoble
Candidate criteria
Languages:
Prepared diploma: Bac+5 – Master 2
Recommended training: Engineering school / University (Computer Science / Applied Maths)
PhD opportunity: Yes
Position start date: 09/01/2025