Deep Learning Software Intern - Summer 2021
Academic and commercial groups around the world are using GPUs to power a revolution in deep learning, enabling breakthroughs in problems from image classification to speech recognition to natural language processing. We are a fast-paced team building tools and software to make design and deployment of new deep learning models easier and accessible to more data scientists.
We are looking for a Software Intern to join our Triton Inference Server team for a summer internship!
What you'll be doing:
In this role, you will develop software to serve predictions from trained neural networks running on GPUs. You will be an active member of the open source deep learning software engineering community. You will juggle a variety of objectives, including: building robust software that can be deployed in production server or cloud settings; understanding new customer use cases and working with product teams to define new capabilities; load-balancing asynchronous requests across available resources; optimizing prediction throughput under latency constraints; and integrating the latest open source technology.
What we need to see:
Currently pursuing an MS or PhD in Computer Science, Computer Architecture, or a related field, or equivalent experience.
Ability to work independently, define project goals and scope, interact directly with open source community, and manage your own development effort.
Strong C/C++ programming and software design skills, including debugging, performance analysis, and test design. Python experience also helpful.
Distributed systems programming experience.
Excellent troubleshooting abilities spanning multiple software layers (storage systems, kernels, and containers).
Experience contributing to a large open source project - use of GitHub, bug tracking, branching and merging code, OSS licensing issues, handling patches, etc.
Experience building and deploying cloud services using HTTP REST, gRPC, protobuf, JSON, and related technologies.
Ways to stand out from the crowd:
You have experience with machine learning algorithms and frameworks such as Caffe, Torch, Theano, and TensorFlow.
Familiarity with container technologies such as Docker, Singularity, and LXC.
Experience with container orchestrators such as Kubernetes, Docker Swarm, Mesos, or Nomad.
Knowledge of GPU programming, such as OpenCL or CUDA.
Occasional travel to conferences and for customer visits may be required.
We are widely considered to be one of the technology world's most desirable employers. Come help us build the real-time, cost-effective computing platform driving our success in the dynamic and quickly growing field of Deep Learning and Artificial Intelligence. NVIDIA offers highly competitive salaries and a comprehensive benefits package. We have some of the most forward-thinking and talented people in the world working for us and, due to unprecedented growth, our world-class engineering teams are growing fast. If you're a creative and autonomous engineer with a real passion for technology, we want to hear from you!

NVIDIA is committed to fostering a diverse work environment and is proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status, or any other characteristic protected by law.

#deeplearning