Acronis
Company overview

Acronis unifies data protection and cybersecurity to deliver integrated, automated cyber protection that solves the safety, accessibility, privacy, authenticity, and security (SAPAS) challenges of the modern digital world. With flexible deployment models that fit the demands of service providers and IT professionals, Acronis provides superior cyber protection for data, applications, and systems with innovative next-generation antivirus, backup, disaster recovery, and endpoint protection management solutions powered by AI. With advanced anti-malware powered by cutting-edge machine intelligence and blockchain-based data authentication technologies, Acronis protects any environment – from cloud to hybrid to on-premises – at a low and predictable cost.

Founded in Singapore in 2003 and incorporated in Switzerland in 2008, Acronis now has more than 2,000 employees and offices in 34 locations worldwide. Its solutions are trusted by more than 5.5 million home users, 500,000 companies, and top-tier professional sports teams. Acronis products are available through over 50,000 partners and service providers in over 150 countries and 26 languages.

Our corporate culture is focused on making a positive impact on the lives of each employee and the communities in which we live. Mutual trust, respect, personal achievement, individual leadership, and a belief that we can contribute to the world every day are the cornerstones of the Acronis Team.

Careers at Acronis

No open positions at Acronis are currently listed.

Latest jobs

  • Software Engineer - Calibration, Avride (Austin, United States)
  • ROR Developer, Nomiso (Hyderabad, India)
  • Principal Product Manager - Data Acquisition and Normalization, ID.me (Mountain View, United States), $222k - $276k
  • Software Engineer (Security), Armis Security (Tel Aviv District, Israel)

Published: 2025-11-22  •  Austin, United States  •  On-site  •  Full-time

About the Team

Our team is responsible for the collection, storage, and processing of large-scale datasets generated by autonomous vehicles and delivery robots. This includes sensor data from cameras, lidars, radars, and other onboard systems. Scaling reliable storage and providing efficient compute tools are essential for supporting downstream teams such as machine learning, simulation, and algorithm development. Our data processing stack incorporates specialized algorithms similar to those deployed directly on autonomous systems in the field.

About the Role

As a Software Engineer, Data Platform at Avride, you will be responsible for designing, building, and maintaining the core data and machine learning infrastructure with a strong focus on software design and code quality. You will design systems to ingest, process, and organize petabytes of telemetry and sensor data into a globally distributed data lake, enabling high-throughput, low-latency access to data for both model training and online inference. Your work will help ML engineers and data scientists iterate faster and deliver better-performing systems.
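
As a rough illustration of the kind of ingestion work described above, here is a minimal PySpark sketch that lands raw vehicle telemetry in a date- and vehicle-partitioned Parquet data lake. The paths, field names, and schema are assumptions made for the example, not Avride's actual pipeline.

# Minimal sketch: batch-ingest raw telemetry JSON into a partitioned Parquet
# data lake. All paths, field names, and schema details are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, to_date

spark = SparkSession.builder.appName("telemetry-ingest").getOrCreate()

# Raw JSON logs uploaded from vehicles (hypothetical location and layout).
raw = spark.read.json("s3://raw-telemetry/2025/11/22/")

curated = (
    raw
    .filter(col("vehicle_id").isNotNull())               # drop malformed records
    .withColumn("event_date", to_date(col("event_ts")))  # derive a partition key
)

# Partitioning by date and vehicle keeps downstream reads (training, simulation,
# analytics) scoped to the slices they actually need.
(
    curated.write
    .mode("append")
    .partitionBy("event_date", "vehicle_id")
    .parquet("s3://data-lake/telemetry/")
)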

What You'll Do
  • Build and maintain robust data pipelines and core datasets to support simulation, analytics, and machine learning workflows, as well as business use cases
  • Design and implement scalable database architectures to manage massive and complex datasets, optimizing for performance, cost, and usability
  • Collaborate closely with internal teams such as Simulation, Perception, Prediction, and Planning to understand their data requirements and workflows
  • Evaluate, integrate, and extend open-source tools (e.g., Apache Spark, Ray, Apache Beam, Argo Workflows) as well as internal systems; a brief sketch follows this list
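
As a hedged example of how one of the tools named above might be applied, the sketch below fans per-log processing out across a cluster with Ray remote tasks. The file paths and the summarize() logic are placeholders, not internal Avride code.

# Sketch: fan out per-file sensor-log processing with Ray remote tasks.
# File paths and the summarize() body are placeholders for illustration.
import ray

ray.init()  # joins a configured cluster if one exists, otherwise starts a local one

@ray.remote
def summarize(log_path: str) -> dict:
    # A real task would decode camera/lidar/radar records here.
    with open(log_path, "rb") as f:
        data = f.read()
    return {"path": log_path, "bytes": len(data)}

paths = ["logs/run_001.bin", "logs/run_002.bin"]   # hypothetical inputs
futures = [summarize.remote(p) for p in paths]     # schedule tasks in parallel
print(ray.get(futures))                            # block until all results arrive
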
What You'll Need
  • Strong proficiency in Python (required); experience with C++ is highly desirable
  • Proven ability to write high-quality, maintainable code and design scalable, robust systems
  • Experience with Kubernetes for deploying and managing distributed systems
  • Hands-on experience with large-scale open-source data infrastructure (e.g., Kafka, Flink, Cassandra, Redis); see the sketch after this list
  • Deep understanding of distributed systems and big data platforms, with experience managing petabyte-scale datasets
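
For concreteness on the streaming side, here is a minimal consumer against the kind of infrastructure listed above, written with the kafka-python client; the broker address, topic name, and message fields are assumptions for illustration only.

# Sketch: consume telemetry events from a Kafka topic with kafka-python.
# Broker address, topic name, and message fields are assumptions.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "vehicle-telemetry",                  # hypothetical topic
    bootstrap_servers="localhost:9092",   # hypothetical broker
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
    group_id="data-platform-sketch",
)

for message in consumer:
    event = message.value
    # Real code would validate, enrich, and hand the event to the data lake writer.
    print(event.get("vehicle_id"), event.get("event_ts"))
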
Nice to Have
  • Experience building and operating large-scale ML systems
  • Understanding of ML/AI workflows and experience with machine learning pipelines
  • Experience optimizing resource usage and performance in distributed environments
  • Familiarity with data visualization and dashboarding tools (e.g., Grafana, Apache Superset)
  • Experience with cloud-based infrastructure (e.g., AWS, GCP, Microsoft Azure)

Candidates must be authorized to work in the U.S. Relocation sponsorship is not offered, and remote work options are not available.