
Program Officer – AI Information Security, AI Governance and Policy

Remote, USA | Full-time | Posted 2025-07-27

About the AI Governance & Policy (AIGP) Program

The AI Governance and Policy program works to improve society’s preparedness for transformative AI, particularly by mitigating global catastrophic risks. Our 11-person team aims to distribute over $150 million in grants annually over the coming years to organizations and individuals that focus on developing sound governance approaches, increasing governance capacity, and advancing policy research and advocacy that could reduce risks from transformative AI. You can read more about our priorities in our current Request for Proposals.

About Open Philanthropy

Open Philanthropy is a philanthropic funder and advisor; our mission is to help others as much as we can with the resources available to us. We stress openness to many possibilities and have chosen our focus areas based on importance, neglectedness, and tractability. Our current giving areas include potential risks from advanced artificial intelligence, global health and development, scientific research, global public health policy, farm animal welfare, and biosecurity and pandemic preparedness. In 2024, we recommended $650 million to high-impact causes, and we’ve recommended over $4 billion in grants since our formation.

About the role

We're seeking an expert in information security to own and lead our AI information security grantmaking program and develop our strategy in this critical area.

You'll initially work closely with the AIGP team while building toward establishing information security as a distinct grantmaking area. You'll develop and execute our AI information security strategy while managing a significant budget (likely in the $18-36 million/year range, possibly more depending on the quality of the opportunities you find).

Our work on AI information security includes safeguarding model weights and algorithmic insights, preventing system poisoning or sabotage, securing training data and compute resources, addressing vulnerabilities across the full machine learning supply chain (from compute resources to MLOps), and enabling secure third-party access for audits and evaluations. Your portfolio will likely span technical research, policy development, and ecosystem growth and support. Our previous grants have supported RAND's Meselson Center (which authored Securing Model Weights), security fieldbuilding projects such as Heron, and benchmarks like Cybench, CVEbench, and BountyBench.

Why we're hiring now: As advanced AI rapidly progresses from research to real-world deployment, the challenges of securing frontier AI systems against theft, subversion, or sabotage by capable actors will become increasingly important. Strong information security at frontier AI projects could help prevent catastrophic misuse, and enable safer periods for AI alignment and control research and implementation. Yet we believe that without significant effort, information security at a number of frontier AI projects over the next few years will not be sufficient to ensure that systems remain secure. Because of this, we'd like to build out a dedicated AI information security program, with specialist leadership in-house directing our efforts on this problem.

Core responsibilities include:

  • Strategic Direction: Own the development and execution of our strategy for supporting AI information security initiatives, focusing on the highest-impact interventions to reduce catastrophic risk.

  • Grantmaking & Portfolio Management:

    • Source and evaluate promising grants, contracts, and projects

    • Design and run funding calls (e.g. Requests for Proposals)

    • Write clear and compelling grant recommendations

    • Build and maintain strong relationships with current and potential grantees

  • Technical Advising: Provide expert technical advice on security-related proposals and strategic questions across Open Philanthropy's AI teams.

  • Network Development: Develop and maintain critical relationships spanning frontier AI labs, hyperscale cloud providers, government agencies, leading security consultancies, and academic institutions.

  • Program Building: Lead the development of AI information security as a distinct program area, including hiring and establishing the program's vision and strategic direction.

Who might be a good fit

You might be a great fit for this work if you:

  • Are strongly motivated to reduce catastrophic risks from advanced AI and see information security as a crucial intervention point for safer AI development.

  • Have deep professional experience (6-15+ years) in information security. Your knowledge of the security field will likely be ‘T-shaped’: some deep knowledge, lots of shallower knowledge. Highly relevant backgrounds include roles at:

    • Frontier AI companies in security-focused positions

    • Top-tier government agencies with cybersecurity missions (e.g., NSA, GCHQ, Unit 8200)

    • Hyperscale cloud providers (AWS, Google Cloud, Azure) in security roles

    • Elite security research teams or consultancies (e.g., Trail of Bits, NCC Group, Mandiant)

    While we're looking for depth in at least one relevant domain, the specific area matters less than whether you've gone deep enough to develop technical taste and critical intuition.

  • Have experience building and leading teams. While you may not have direct reports immediately, you'll likely hire and lead a team as the program grows.

  • Think critically about transformative AI scenarios and can reason through their profound security implications, especially for preventing worst-case outcomes.

  • Communicate exceptionally well, explaining complex technical security concepts clearly and accurately to both specialist and non-specialist audiences.

  • Demonstrate strong judgment and strategic thinking, navigating uncertainty about both technical feasibility and strategic impact, and understanding where security fits in the larger AI governance ecosystem.

  • Take ownership proactively, identifying what needs to happen and making it happen, even when the path forward isn't clearly defined.

Desirable but not essential: broad familiarity with ML research, experience in policy development or advocacy, and knowledge of research on AI alignment and control.

Above all, we are looking for people motivated to contribute to our mission of helping others as much as we can with the resources available to us. If this role aligns with your values and expertise, we encourage you to apply even if you don't meet every qualification listed above.

Process and timelines

Our application process will include:

  • An initial application that consists of answering a series of questions.

  • An initial 30-minute interview and a paid work test.

  • An interview with Alex Lawsen, Senior Program Associate on the team.

  • A series of final interviews with several Open Philanthropy team members, along with reference checks.

We expect to make offers by mid-August and strongly encourage candidates to let us know if they need to hear back from us sooner at any point during the process.

Please note that due to time constraints, we cannot give feedback during the early stages of the process, including on work tests. Thank you for your understanding.

Role details & benefits
  • Compensation: The baseline compensation for this role is $262,992.15, which would be distributed as a base salary of $239,992.15 and an unconditional 401(k) grant of $23,000. For internationally based hires, all compensation will be distributed in the form of take-home salary.

    • These compensation figures assume a remote location; there would be upward geographic adjustments for candidates based in San Francisco or Washington, D.C.

  • Time zones and location: You can work from anywhere, but should be willing to overlap with U.S. East Coast working hours for at least 15 hours/week. We'd prefer someone who is based in the U.S. or open to traveling there periodically, but this isn't a strict requirement.

    • We'll also consider sponsoring U.S. work authorization for international candidates (though we don't control who is and isn't eligible for a visa and can't guarantee visa approval).

  • Benefits: Our benefits package includes:

    • Excellent health insurance (we cover 100% of premiums within the U.S. for you and any eligible dependents) and an employer-funded Health Reimbursement Arrangement for certain other personal health expenses.

    • Dental, vision, and life insurance for you and your family.

    • Four weeks of PTO recommended per year.

    • Four months of fully paid family leave.

    • A generous and flexible expense policy — we encourage staff to expense the ergonomic equipment, software, and other services that they need to stay healthy and productive.

    • A continual learning policy that encourages staff to spend time on professional development with related expenses covered.

    • Support for remote work — we'll cover a remote workspace outside your home if you need one, or connect you with an Open Philanthropy coworking hub in your city.

    • We can't always provide every benefit we offer U.S. staff to international hires, but we're working on it (and will usually provide cash equivalents of any benefits we can't offer in your country).

  • Start date: Flexible, though we'd prefer someone to start as soon as possible after receiving an offer.

We aim to employ people with many different experiences, perspectives, and backgrounds who share our passion for accomplishing as much good as we can. We are committed to creating an environment where all employees have the opportunity to succeed, and we do not discriminate based on race, religion, color, national origin, gender, sexual orientation, or any other legally protected status.

If you need assistance or an accommodation due to a disability, or have any other questions about applying, please contact [email protected].

Please apply by 11:59 pm (Pacific Time) on Sunday, July 6, to be considered.

US-based Program staff are typically employed by Open Philanthropy Project LLC, which is not a 501(c)(3) tax-exempt organization. As such, this role is unlikely to be eligible for public service loan forgiveness programs.
