Lightmatter is a photonic computing company redefining what computers, and the people who use them, are capable of by building the engines that will power discovery and drive progress sustainably. Modern progress relies heavily on computing, yet traditional transistors are hitting a dead end, and the prospect of endlessly building new data centers is an environmental nightmare. Lightmatter's solution is photonic computing: using photons instead of electrons to take advantage of light's higher bandwidth.
Our company has combined electronics, photonics, and new algorithms to create a next-generation computing platform for artificial intelligence. Lightmatter’s new processor and interconnect are faster, more efficient, and cooler than anything created before.
If you are passionate about advanced AI technology and would like to develop scalable algorithms, hardware and ML techniques, join us!
- Develop parallel algorithms for balancing compute and communication (within an accelerator or between accelerators) to maximize throughput and minimize latency.
- Deliver hardware and software co-design targeted at low-latency inference.
- Influence the development of machine learning hardware by simulating low-latency, high-throughput inference for different models.
- Publish and present new research at premier ML/CS conferences.
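One concrete flavor of the compute/communication balancing described above is the ring all-reduce collective used by frameworks such as NCCL and PyTorch distributed. Below is a minimal single-process simulation of the pattern; the function name and data layout are our own illustration, not part of any Lightmatter system.

```python
def ring_allreduce(data):
    """Simulate ring all-reduce over n ranks (illustrative sketch only).

    data: list of n lists, one per simulated rank, each holding n chunks.
    After the collective, every rank holds the element-wise chunk sums.
    """
    n = len(data)
    # Reduce-scatter phase: after n-1 steps, rank r owns the full sum
    # of chunk (r + 1) % n. Messages are snapshotted before applying,
    # matching the synchronous semantics of a real collective step.
    for step in range(n - 1):
        msgs = [(r, (r - step) % n, data[r][(r - step) % n]) for r in range(n)]
        for src, idx, val in msgs:
            dst = (src + 1) % n
            data[dst][idx] = data[dst][idx] + val
    # All-gather phase: circulate the completed chunks for another n-1 steps.
    for step in range(n - 1):
        msgs = [(r, (r + 1 - step) % n, data[r][(r + 1 - step) % n]) for r in range(n)]
        for src, idx, val in msgs:
            data[(src + 1) % n][idx] = val
    return data

grads = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]  # rank r holds grads[r]
result = ring_allreduce(grads)
assert all(row == [12, 15, 18] for row in result)  # every rank has the sums
```

The appeal of the ring topology is that each rank sends and receives the same amount of data per step, so bandwidth use stays balanced as the number of accelerators grows.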
- MS in Computer Science or a related field; PhD strongly preferred
- 4+ years of industry experience
- Expert understanding of deep learning, parallel computing, compilers and/or hardware architecture.
- Experience working with large ML/HPC workloads on distributed computing systems built with accelerators such as GPUs or TPUs.
- Experience developing and modifying machine learning models for scalability.
- Experience with, or a solid understanding of, low-precision training and inference.
- Strong technical understanding of advanced techniques used in parallel computing, deep learning and HPC.
- Ability to model complex workloads on different architecture proposals.
- Understanding of parallel computing architectures.
- Experience with scalable frameworks such as MPI, PyTorch distributed, CUDA, and NCCL.
- Highly proficient in deep learning programming languages and frameworks, e.g. Python, C++, CUDA, TensorFlow, PyTorch, JAX.
- A track record of solving practical problems with innovative algorithmic solutions.
- PhD in Computer Science or related field
- You have contributed first-hand to important software or ML algorithms deployed in industry.
- Experience working with compiler optimizations is a plus.
- A strong publication record in machine learning, parallel computing, and/or computer architecture.
- You have demonstrated the ability to perform independent research.
- Prior experience with quantization and compression methods in deep learning.
- You are enthusiastic about new technologies, algorithms, and mathematics.
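As an illustration of the low-precision and quantization methods mentioned above, here is a minimal sketch of symmetric int8 quantization in plain Python. The function names are our own illustration, not part of any Lightmatter toolchain.

```python
def quantize_int8(values):
    """Map floats to int8 codes with one shared scale (symmetric quantization)."""
    # Scale so the largest magnitude maps to +/-127; fall back to 1.0
    # to avoid a zero scale when all inputs are zero.
    scale = max(abs(v) for v in values) / 127 or 1.0
    codes = [max(-128, min(127, round(v / scale))) for v in values]
    return codes, scale

def dequantize(codes, scale):
    """Reconstruct approximate floats from int8 codes and the shared scale."""
    return [c * scale for c in codes]

weights = [0.5, -1.0, 0.25, 0.9]
codes, scale = quantize_int8(weights)
approx = dequantize(codes, scale)
# Each reconstructed weight is within one quantization step of the original.
assert all(abs(a - w) <= scale for a, w in zip(approx, weights))
```

The same idea, applied per-tensor or per-channel, underlies int8 inference in mainstream frameworks: storage and bandwidth drop 4x versus float32 at the cost of a bounded rounding error.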
- Health Care Plan (Medical, Dental & Vision)
- Retirement Plan (401k, IRA)
- Life Insurance (Basic, Voluntary & AD&D)
- Paid Time Off (Vacation, Sick & Public Holidays)
- Family Leave (Maternity, Paternity)
- Short Term & Long Term Disability
- Training & Development
- Work From Home
- Free Food & Snacks
- Wellness Resources
- Stock Option Plan