
MeshyAI

Data Infrastructure Engineer

Posted 13 Days Ago
Remote
2 Locations
Senior level
The Data Infrastructure Engineer will design and maintain data systems for large-scale AI model training, handling structured and unstructured datasets while ensuring data quality and scalability.

About Meshy
Headquartered in Silicon Valley, Meshy is the leading 3D generative AI company on a mission to Unleash 3D Creativity. Meshy makes it effortless for both professional artists and hobbyists to create unique 3D assets—turning text and images into stunning 3D models in just minutes. What once took weeks and $1,000 now takes 2 minutes and $1.
Our global team of top experts in computer graphics, AI, and art includes alumni from MIT, Stanford, and Berkeley, as well as veterans from Nvidia and Microsoft. With 3 million users (and growing), Meshy is trusted by top developers and backed by premier venture capital firms like Sequoia and GGV.

  • No. 1 in popularity among 3D AI tools, according to a16z Games
  • No. 1 in website traffic among 3D AI tools, according to SimilarWeb (2M monthly visits)
  • Leading 3D foundation model with detailed textures and fine geometry
  • $52M in funding from top VCs
  • 2.5M users and 20M models generated

Ethan Yuanming Hu is the founder and CEO. Ethan earned his Ph.D. in graphics and AI from MIT, where he developed the Taichi GPU programming language (27K stars on GitHub, used by 300+ institutions). His Ph.D. thesis received an honorable mention for the SIGGRAPH 2022 Outstanding Doctoral Dissertation Award, and his research has been cited over 2,700 times. His favorite animal is the llama.


About the Role:

We are seeking a Data Infrastructure Engineer to join our growing team. In this role, you will design, build, and operate distributed data systems that power large-scale ingestion, processing, and transformation of datasets used for AI model training. These datasets span traditional structured data as well as unstructured assets such as images and 3D models, which often require specialized preprocessing for pretraining and fine-tuning workflows.
 
This is a versatile role: you’ll own end-to-end pipelines (from ingestion to transformation), ensure data quality and scalability, and collaborate closely with ML researchers to prepare diverse datasets for cutting-edge model training. You’ll thrive in our fast-paced startup environment, where problem-solving, adaptability, and wearing multiple hats are the norm.

What You’ll Do:

  • Core Data Pipelines
    • Design, implement, and maintain distributed ingestion pipelines for structured and unstructured data (images, 3D/2D assets, binaries).
    • Build scalable ETL/ELT workflows to transform, validate, and enrich datasets for AI/ML model training and analytics.
  • Pretraining Data Processing
    • Support preprocessing of unstructured assets (e.g., images, 3D/2D models, video) for training pipelines, including format conversion, normalization, augmentation, and metadata extraction.
    • Implement validation and quality checks to ensure datasets meet ML training requirements.
    • Collaborate with ML researchers to quickly adapt pipelines to evolving pretraining and evaluation needs.
  • Distributed Systems & Storage
    • Architect pipelines across cloud object storage (S3, GCS, Azure Blob), data lakes, and metadata catalogs.
    • Optimize large-scale processing with distributed frameworks (Spark, Dask, Ray, Flink, or equivalents).
    • Implement partitioning, sharding, caching strategies, and observability (monitoring, logging, alerting) for reliable pipelines.
  • Infrastructure & DevOps
    • Use infrastructure-as-code (Terraform, Kubernetes, etc.) to manage scalable and reproducible environments.
    • Integrate CI/CD best practices for data workflows.
  • Data Governance & Collaboration
    • Maintain data lineage, reproducibility, and governance for datasets used in AI/ML pipelines.
    • Work cross-functionally with ML researchers, graphics/vision engineers, and platform teams.
    • Embrace versatility: switch between infrastructure-level challenges and asset/data-level problem solving.
    • Contribute to a culture of fast iteration, pragmatic trade-offs, and collaborative ownership.

What We’re Looking For:

  • Technical Background
    • 5+ years of experience in data engineering, distributed systems, or similar.
    • Strong programming skills in Python (Scala/Java/C++ a plus).
    • Solid skills in SQL for analytics, transformations, and warehouse/lakehouse integration.
    • Proficiency with distributed frameworks (Spark, Dask, Ray, Flink).
    • Familiarity with cloud platforms (AWS/GCP/Azure) and storage systems (S3, Parquet, Delta Lake, etc.).
    • Experience with workflow orchestration tools (Airflow, Prefect, Dagster).
  • Domain Skills (Preferred)
    • Experience handling large-scale unstructured datasets (images, video, binaries, or 3D/2D assets).
    • Familiarity with AI/ML training data pipelines, including dataset versioning, augmentation, and sharding.
    • Exposure to computer graphics or 3D/2D data processing is strongly preferred.
  • Mindset
    • Comfortable in a startup environment: versatile, self-directed, pragmatic, and adaptive.
    • Strong problem solver who enjoys tackling ambiguous challenges.
    • Commitment to building robust, maintainable, and observable systems.

Nice to Have:

  • Kubernetes for distributed workloads and orchestration.
  • Data warehouses or lakehouse platforms (Snowflake, BigQuery, Databricks, Redshift).
  • Familiarity with GPU-accelerated computing and HPC clusters.
  • Experience with 3D/2D asset processing (geometry transformations, rendering pipelines, texture handling).
  • Rendering engines (Blender, Unity, Unreal) for synthetic data generation.
  • Open-source contributions in ML infrastructure, distributed systems, or data platforms.
  • Familiarity with secure data handling and compliance.

Our Values:
  • Brain: We value intelligence and the pursuit of knowledge. Our team is composed of some of the brightest minds in the industry.
  • Heart: We care deeply about our work, our users, and each other. Empathy and passion drive us forward.
  • Gut: We trust our instincts and are not afraid to take bold risks. Innovation requires courage.
  • Taste: We have a keen eye for quality and aesthetics. Our products are not just functional but also beautiful.

Why Join Meshy?
  • Competitive salary, equity, and benefits package.
  • Opportunity to work with a talented and passionate team at the forefront of AI and 3D technology.
  • Flexible work environment, with options for remote and on-site work.
  • Opportunities for fast professional growth and development.
  • An inclusive culture that values creativity, innovation, and collaboration.
  • Unlimited, flexible time off.

Benefits:

  • Competitive salary, benefits and stock options.
  • 401(k) plan for employees.
  • Comprehensive health, dental, and vision insurance.
  • The latest and best office equipment.

Top Skills

Airflow, AWS, Azure, C++, Dagster, Dask, Delta Lake, Flink, GCP, Java, Kubernetes, Parquet, Prefect, Python, Ray, S3, Scala, Spark, SQL, Terraform
