Threat Intelligence Analyst
Vultr
Full Time
Mid-level (3 to 4 years), Senior (5 to 8 years)
Candidates should possess 5+ years of experience in quantitative research, forecasting, or risk modeling roles within fields such as finance, technology, safety, security, or public policy. Deep fluency in statistical inference, forecasting, uncertainty quantification, and decision modeling, particularly under sparse or adversarial data conditions, is required. Expertise with modern toolchains including NumPyro, TensorFlow Probability, PyMC, Darts, GluonTS/Chronos, sktime, PyOD 2.0, River, and scikit-survival is essential, along with strong coding skills in Python/JAX/PyTorch or R and data-engineering fundamentals like SQL and Spark.
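For illustration, a minimal sketch of the kind of Bayesian inference on sparse count data this toolchain implies, written with PyMC; the weekly counts, prior parameters, and variable names below are hypothetical.

import numpy as np
import pymc as pm

# Hypothetical weekly counts of a threat signal (sparse, low-volume data).
counts = np.array([0, 2, 1, 0, 3, 1, 0, 2])

with pm.Model():
    # Weakly informative prior on the underlying event rate.
    rate = pm.Gamma("rate", alpha=2.0, beta=1.0)
    # Observed counts modeled as Poisson draws from that rate.
    pm.Poisson("obs", mu=rate, observed=counts)
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=42)
    # Posterior predictive draws support next-period forecasts with explicit uncertainty.
    pm.sample_posterior_predictive(idata, extend_inferencedata=True, random_seed=42)

print("Posterior mean event rate:", float(idata.posterior["rate"].mean()))

A comparable model could be expressed in NumPyro or TensorFlow Probability; the point is reporting a full posterior rather than a single point estimate.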
As a Quantitative Threat Forecasting Analyst, you will design and deploy probabilistic and Bayesian models to forecast threat emergence, detect anomalies, and quantify risk, often when signal is weak and timelines are short. You will build classical and deep-learning forecasts, develop real-time anomaly-detection pipelines, apply survival analysis and rare-event methods, and run stress tests and Monte Carlo simulations to evaluate threat likelihood and impact. You will also collaborate across disciplines (investigations, engineering, policy) to embed statistical rigor into threat prioritization, guardrails, and product decisions; communicate insights through briefs, dashboards, and visualizations to drive executive action; and own the production pipelines that support this work.
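As an illustration of the stress-testing and Monte Carlo work described above (a hedged sketch with hypothetical frequency and severity parameters, not a description of any actual model used by the team), the following NumPy simulation combines an uncertain incident rate with a heavy-tailed impact distribution to estimate tail risk.

import numpy as np

rng = np.random.default_rng(7)
n_sims = 20_000

# Hypothetical annual incident frequency: Poisson counts with an uncertain (Gamma) rate.
rates = rng.gamma(shape=2.0, scale=1.5, size=n_sims)
incidents = rng.poisson(lam=rates)

# Hypothetical per-incident impact: heavy-tailed lognormal severity, summed per simulated year.
losses = np.array([
    rng.lognormal(mean=10.0, sigma=1.2, size=k).sum() if k > 0 else 0.0
    for k in incidents
])

# Summary statistics of the kind that would feed a risk brief or dashboard.
print("P(at least one incident):", (incidents > 0).mean())
print("Median annual impact:", round(float(np.median(losses)), 2))
print("95th-percentile impact:", round(float(np.quantile(losses, 0.95)), 2))

In practice the frequency and severity assumptions would come from fitted models and expert judgment rather than fixed constants.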
Develops safe and beneficial AI technologies
OpenAI develops and deploys artificial intelligence technologies aimed at benefiting humanity. The company creates advanced AI models capable of performing various tasks, such as automating processes and enhancing creativity. OpenAI's products, like Sora, allow users to generate videos from text descriptions, showcasing the versatility of its AI applications. Unlike many competitors, OpenAI operates under a capped-profit model, which limits the returns it can deliver to investors and directs excess earnings toward maximizing the social benefits of AI. This commitment to safety and ethical considerations is central to its mission of ensuring that artificial general intelligence (AGI) serves all of humanity.