Job Description
Join a high-impact remote team focused on productionizing real-time ML systems for trust and safety. This backend-centric role involves building APIs, tools, and orchestration logic to integrate cutting-edge ML models and ensure system reliability and performance.
Key Information
- Location Model: Remote
- Location Details: Remote / United States
- Salary Range: $120K–$230K
- Years Experience Min: 3 years
- Employment Type: Permanent
- Company Industry: SaaS (Trust & Safety/ML)
- Visa Sponsorship: N/A
- Relocation Assistance: N/A
- Working Hours: N/A
Technical Stack
- Core (Must-Have):
- Python
- API Design (REST, gRPC, FastAPI, Flask, etc.)
- Cloud Environments (GCP or AWS)
- CI/CD Workflows
- Containerization
- Nice-to-Have:
- Real-time ML inference or streaming data pipelines
- Trust & safety domain experience
- Vector databases (FAISS, Pinecone)
- Building internal tools/dashboards
Role & Responsibilities
- Key Responsibilities:
- Design and build secure, high-performance APIs for ML inference and workflows.
- Develop internal tools for policy, annotation, and review queue management.
- Integrate ML models (classifiers, LLMs) into backend systems.
- Build and maintain system monitoring for performance and reliability.
- Implement CI/CD pipelines and ensure scalability.
- Must-Have Qualifications:
- 3–8 years backend or systems engineering experience.
- Strong proficiency with Python or another modern backend language.
- Experience designing and deploying production APIs.
- Experience with GCP or AWS.
- Familiarity with CI/CD and containerization.
- Nice-to-Have Qualifications:
- Experience integrating ML models, inference systems, or vector search.
- Experience with Trust & Safety or moderation systems.
- Experience building dashboards or internal tools.
Company & Culture
- Benefits Highlights:
- N/A
- Potential Red Flags / Things to Note:
- Role involves working in ambiguous environments.
- Focus on moving quickly.
- Company Culture Snippets:
- High-impact role.
- Product mindset emphasized.
- Requires strong cross-functional collaboration (ML, Infra).
3–8 Years of Industry Experience | Remote | High-Impact
About the Role: We’re looking for a backend-focused software engineer to help productionize our ML systems for real-time use. You'll build APIs, orchestration logic, and internal tools that support moderation workflows, annotation interfaces, and monitoring dashboards. You’ll collaborate closely with ML and MLOps engineers to deliver high-reliability systems that power abuse detection, threat classification, and review queues. This is an applied role for someone who can own full backend service architecture and help turn complex models into accessible, performant APIs.
In This Role, You Will: Design and build secure, high-performance APIs for inference, review workflows, and system orchestration. Develop internal tools to manage policy configs, annotations, and review queues. Integrate ML classifiers and LLMs into backend systems for streaming or batch inference. Build and maintain system monitoring for latency, errors, throughput, and system performance. Implement CI/CD pipelines and ensure the reliability and scalability of backend services.
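To make the first responsibility concrete, here is a miniature sketch of a real-time classification handler of the kind described above. All names are hypothetical and it uses only the standard library; in practice a framework such as FastAPI or Flask (both named in the requirements below) would supply routing, validation, and serialization, and `run_model` stands in for a real classifier or model-server call.

```python
import json
from dataclasses import dataclass


@dataclass
class ClassifyResult:
    label: str
    score: float


def run_model(text: str) -> ClassifyResult:
    # Placeholder for the real classifier call (an in-process model
    # or a request to a model server).
    return ClassifyResult(label="benign", score=0.99)


def handle_classify(body: str) -> str:
    """Validate a JSON request body and return a JSON response."""
    payload = json.loads(body)
    if not isinstance(payload.get("text"), str):
        return json.dumps({"error": "field 'text' (string) is required"})
    result = run_model(payload["text"])
    return json.dumps({"label": result.label, "score": result.score})
```

The validation-then-inference shape is the core of such an endpoint; secure deployments would add authentication, rate limiting, and timeouts around it.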
We’re Looking for Someone Who: Has 3–8 years of backend engineering experience (Python preferred). Has designed and deployed production APIs in high-reliability environments. Understands how to integrate ML models, inference systems, or vector search pipelines into backend applications. Has experience with modern infrastructure stacks (Docker, Terraform, CI/CD, GCP or AWS). Has strong knowledge of software development lifecycles and best practices. Brings a product mindset and cares about system design, speed, clarity, and reliability. Moves quickly in ambiguous environments and takes ownership end to end.
Requirements: 3–8 years of experience in backend or systems engineering roles. Strong proficiency with Python or another modern backend language. Experience designing APIs (REST, gRPC, FastAPI, Flask, etc.). Experience deploying production services in cloud environments (GCP or AWS). Familiarity with CI/CD workflows and containerization. Comfortable working cross-functionally with ML and infra teams. Clear communicator who can write well and explain system decisions to technical and non-technical teammates.
Nice to Have Experience With: Real-time ML inference or streaming data pipelines. Trust & safety, content moderation, or platform abuse detection. Experience integrating semantic search, vector databases (e.g., FAISS, Pinecone). Building dashboards or developer-facing internal tools.
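As a toy illustration of the vector-search integration mentioned above, the sketch below does a brute-force cosine-similarity lookup over an in-memory index (all names hypothetical). Libraries such as FAISS or Pinecone provide the same lookup at scale using approximate indexes.

```python
import math


def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)


def nearest(query: list[float], index: dict[str, list[float]]) -> str:
    """Return the id of the stored vector most similar to the query."""
    return max(index, key=lambda key: cosine(query, index[key]))
```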
What Success Looks Like in the First 3 Months: You've designed and deployed a stable, documented API for real-time model inference. You've integrated at least one ML model into a production system with sub-200ms latency. You've built internal tools to streamline model review, policy testing, or annotation workflows. You've implemented monitoring and observability tools that give visibility into backend and model performance.
Job Details
Salary
$120K–$230K
Location
Remote / United States
Key Skills
Docker, AWS, Python, GCP, CI/CD, Terraform, MLOps, API Design