
AI Security Specialist – LockedIn AI (Remote, Full-Time)

Mspress New York City, New York, US

About the Role

LockedIn AI is the #1 real-time AI interview and meeting copilot, trusted by over one million users worldwide. We are building a fast-scaling, AI-native platform that powers real-time assistance during interviews, coding assessments, and professional conversations. Our systems process sensitive user interactions in real time, making security a core pillar of our product. We are building safe, reliable, privacy-first AI systems that users can trust in high-stakes environments.

Role Overview

We are looking for a hands-on AI Security Specialist to protect, harden, and continuously improve the security posture of our AI systems, ML models, and data pipelines. This role sits at the intersection of cybersecurity, machine learning, and software engineering. You will secure the entire AI lifecycle, from training data and model weights to production inference endpoints, and defend against AI-specific threats such as prompt injection, model extraction, adversarial inputs, data leakage, and RAG vulnerabilities, while ensuring secure, scalable AI deployment for over 1M users. The ideal candidate combines strong security fundamentals with a deep understanding of modern AI systems and an attacker mindset.

Key Responsibilities

  • Conduct end-to-end security assessments of LLMs, speech systems, and RAG pipelines.
  • Identify and mitigate vulnerabilities before production release.
  • Build defenses against prompt injection, jailbreaks, data exfiltration, and model inversion.
  • Implement LLM firewalls, guardrails, and input/output filtering systems.
  • Apply the OWASP LLM Top 10 and MITRE ATLAS frameworks for threat modeling.
  • Continuously monitor AI systems for anomalies, drift, and adversarial behavior.
  • Design and run AI red team exercises simulating real-world attacks.
  • Develop AI-specific incident response playbooks.
  • Build dashboards for real-time security monitoring and alerting.
  • Investigate and respond to model compromise or data leakage incidents.
  • Embed security into the ML lifecycle: data, training, deployment, inference.
  • Implement model signing, dependency verification, and container hardening.
  • Enforce secure coding practices for AI services and APIs.
  • Conduct security reviews of ML pipelines and inference systems.
  • Protect training data, embeddings, model weights, and vector databases.
  • Implement encryption at rest and in transit across all AI systems.
  • Enforce RBAC, audit logging, and access control policies.
  • Prevent sensitive data leakage through model outputs.
  • Ensure compliance with privacy-first product principles and with GDPR, CCPA, the EU AI Act, and AI governance standards.
  • Conduct risk assessments for AI features and deployments.
  • Maintain AI security documentation, policies, and runbooks.
  • Evaluate models for fairness, bias, and robustness.
  • Support audit readiness and regulatory requirements.
  • Work closely with engineering, product, and research teams.
  • Translate security risks into actionable engineering improvements.
  • Stay current on emerging AI attack techniques and defenses.
  • Contribute to internal and external AI security knowledge sharing.

Required Qualifications

Experience

  • 3+ years in cybersecurity, application security, or infrastructure security.
  • 1+ years securing AI/ML or LLM-based systems.
  • Experience with adversarial ML, prompt injection defense, or LLM security.
  • Hands-on experience with security testing, red teaming, or penetration testing.
  • Startup or fast-paced environment experience preferred.

Education

  • Bachelor's degree in Computer Science, Cybersecurity, or a related field.
  • Security certifications (CISSP, CEH, Security+, AWS Security Specialty) are a plus.

Technical Skills

  • Python for security automation and tooling
  • Understanding of ML frameworks (PyTorch / TensorFlow)
  • Experience with LLM security tools (NeMo Guardrails, Lakera, etc.)
  • Familiarity with AWS / GCP / Azure security practices
  • Knowledge of Docker, Kubernetes, and cloud security
  • Experience with SIEM tools, logging, and monitoring systems
  • Understanding of API, identity, and encryption security

Core Skills

  • Strong attacker mindset (adversarial thinking)
  • Ability to translate risk into engineering solutions
  • Clear technical communication and documentation
  • Strong ownership and independence
  • Cross-functional collaboration skills

Preferred Qualifications

  • Experience with RAG security and vector database protection
  • AI red teaming or adversarial ML research experience
  • Knowledge of model supply chain security (ML-BOM, provenance tracking)
  • Experience with real-time AI or streaming systems security
  • Contributions to open-source AI security tools or research
  • Startup experience (Seed to Series A stage)
  • Background in SaaS, edtech, or AI product security

What We Offer

  • Meaningful equity ownership in a fast-growing AI company
  • Impact on a product used by over 1 million users
  • Remote-first flexibility with optional NYC collaboration
  • Work at the frontier of AI + security engineering
  • High autonomy and ownership culture
  • Opportunity to define AI security practices at scale

Why Join Us

  • Secure next-generation real-time AI systems
  • Work on cutting-edge AI threat models and defenses
  • Build production security systems used in high-stakes environments
  • Join a fast-moving, AI-native engineering team
  • Solve complex, real-world AI security challenges

To Apply

Please submit a resume or CV, along with a short note covering why you want to join, whether you have used the product, and what security improvements you would suggest. Optionally, include GitHub, security research, or portfolio work.

Equal Opportunity Employer.
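To give candidates a sense of the input/output filtering work described above, here is a minimal, hedged sketch of a pattern-based prompt-injection screen plus an output redaction step. The pattern list and function names are illustrative assumptions, not LockedIn AI's actual implementation; production guardrails would typically use a trained classifier or a tool such as NeMo Guardrails or Lakera, as the posting notes.

```python
import re

# Illustrative patterns only -- real systems maintain far richer,
# continuously updated detection logic (often ML-based).
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard (the )?system prompt",
    r"you are now (in )?developer mode",
]

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")


def screen_input(user_text: str) -> bool:
    """Return True if the input matches a known prompt-injection pattern."""
    lowered = user_text.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)


def redact_output(model_text: str) -> str:
    """Redact email addresses from model output to limit data leakage."""
    return EMAIL_RE.sub("[REDACTED]", model_text)
```

A filter like this would sit in front of (and behind) the model: inputs flagged by `screen_input` are blocked or routed for review, and all outputs pass through `redact_output` before reaching the user.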

Responsibilities

  • Conduct end-to-end security assessments of AI systems
  • Implement defenses against prompt injection and data exfiltration
  • Monitor AI systems for anomalies and adversarial behavior
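The monitoring responsibility above can be sketched as a simple statistical anomaly check; this is an illustrative assumption about one possible approach (a z-score test over a tracked metric such as output length or refusal rate), not the company's actual monitoring stack, which the posting says involves SIEM tooling and dashboards.

```python
from statistics import mean, stdev


def flag_anomalies(baseline: list[float], observed: list[float],
                   threshold: float = 3.0) -> list[float]:
    """Return observations whose z-score against the baseline exceeds threshold.

    baseline: historical values of a tracked metric (e.g. output length).
    observed: new values to check for drift or adversarial behavior.
    """
    mu, sigma = mean(baseline), stdev(baseline)
    return [x for x in observed if sigma and abs(x - mu) / sigma > threshold]
```

In practice such a check would run continuously over streaming metrics, with flagged values feeding the alerting dashboards mentioned in the full description.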

Qualifications

  • AI security knowledge
  • Experience with real-time/streaming AI security
  • Experience with threat modeling and security frameworks

Required Skills

Security testing, Threat modeling, Security engineering, AI/ML security, Red teaming

Keywords

AI security, ML security, RAG security, threat modeling, OWASP, MITRE ATLAS

Interested in this role?

Apply now and take the next step in your career.


Job Overview

Date Posted 2 weeks ago
Location New York City, New York, US
Job Type Full-time
Work Mode Remote
Category Security Defense, AI Security, Real-Time AI Systems Security

About the Company

Mspress