Lead Security Architect
Access Systems
Full Time
Senior (5 to 8 years)
Candidates must have a minimum of 10 years of security engineering experience with demonstrated technical leadership, or an equivalent combination of related education and work experience. A strong background in data and ML security is essential, with generative AI experience preferred. A deep understanding of AI/ML security risks such as adversarial attacks, model poisoning, data privacy, and bias is required, as is experience with cloud AI/ML platforms (AWS Bedrock, Azure AI, GCP AI Platform, etc.) and with building security frameworks and tools from scratch. Strong programming skills in Python and ML frameworks (TensorFlow, PyTorch, etc.) are necessary, along with familiarity with security assessment methodologies, risk management frameworks, and compliance/control frameworks (PCI DSS, SOX, SOC 2, ISO 27001, GDPR, NIST CSF). Financial services or fintech experience is strongly preferred.
The Principal AI Security Engineer will lead the technical AI security strategy across ML and Generative AI infrastructure, serving as the primary technical owner for AI risk management and the Model Risk Office from a security perspective. This role involves building comprehensive AI security tools, frameworks, and AI-powered security solutions to enable secure AI/ML systems at scale. Key responsibilities include:
- Driving AI risk management across model security, adversarial attacks, data privacy, and AI supply chain security, while partnering with legal and privacy teams on governance.
- Leading AI security strategy and risk assessment for customer-facing AI products, fraud detection models, LLMs, and recommendation systems.
- Building and maintaining AI security frameworks, tools, and monitoring capabilities for model validation and ongoing risk management.
- Conducting security assessments of AI/ML model architectures, training pipelines, and deployment infrastructure.
- Developing security controls for AI/ML systems, including adversarial attack prevention and bias mitigation.
- Mentoring security engineers and cross-functional teams on AI security best practices.
- Partnering closely with AI/ML engineering, data science, product security, and compliance teams.
- Creating secure AI development lifecycle practices and self-service security capabilities.
- Driving technical decisions for AI security architecture and implementation across the organization.
Card issuing and payment processing solutions
Marqeta provides modern card issuing and payment processing solutions in the fintech sector. Its platform allows businesses to create, issue, and manage payment cards tailored to their specific needs, such as expense management and consumer payments. The service operates through an open API, enabling clients to integrate Marqeta's capabilities into their own applications. This flexibility sets Marqeta apart from competitors, as it caters to a diverse range of clients, including financial institutions and tech companies. The company generates revenue primarily through transaction fees each time a card is used, along with potential setup and service fees. Marqeta's ability to quickly adapt to the growing demand for digital payments, especially during the COVID-19 pandemic, has contributed to its significant presence in the market.