Skip to main content

AI Regulation in the Public Sector: Regulating Governments Use of AI

By April 19, 2024July 30th, 2024AI Chatbot News

How the White House sees the future of safeguarding AI

Secure and Compliant AI for Governments

Traditionally, large language models powering generative AI tools were available exclusively in the commercial cloud. However, Microsoft has designed a new architecture that enables government agencies to access these language models from Azure Government securely. This ensures that users can maintain the security required for government cloud operations. At AWS, we’re excited about generative AI’s potential to transform public sector organizations of all sizes.

What is the difference between safe and secure?

‘Safe’ generally refers to being protected from harm, danger, or risk. It can also imply a feeling of comfort and freedom from worry. On the other hand, ‘secure’ refers to being protected against threats, such as unauthorized access, theft, or damage.

For example, in an AI-powered process automation solution, the entire business process and all underlying systems should be equally secure and compliant – not just the AI-enabled tasks inside of it. Copyright infringement is another unresolved issue with large language models like GPT4. CogAbility’s best-in-class chatbot technology uses 8 different methods https://www.metadialog.com/governments/ to deliver the right answer to each constituent’s question – including carefully-worded responses approved by management where necessary. This data can also be a lucrative target for cyber attackers who seek to steal or manipulate sensitive information. Most AI systems today rely on large amounts of data to learn, predict, and improve themselves over time.

Staff Shortage, Limited Budgets, and Antiquated Systems: The Federal Government’s Need for Conversational AI

Further, an already controversial application of AI, facial recognition, is being used by law enforcement to identify suspects and has recently garnered much attention due to the wrongful arrest of a man in Georgia who was mistaken for a fugitive by Louisiana authorities’ racial recognition technology. With the Gender Shades project revealing the inaccuracies of facial recognition technology for darker-skinned individuals and both the victim and fugitive being Black, this highlights the need to ensure that AI systems, particularly those used in high-risk contexts, are not biased and are accurate for all subgroups. As such, the UK’s Equality and Human Rights Commission has called for suspending facial recognition in policing in England and Wales, with similar action being taken in Washington’s city of Bellingham and Alabama.

The generative AI boom has reached the US federal government, with Microsoft announcing the launch of its Azure OpenAI Service that allows Azure government customers to access GPT-3 and 4, as well as Embeddings. Many people do not understand that when you submit queries and training data to a large language model owned by someone else, you are explicitly giving them the right to use this data in future generations of the technology. For example, although OpenAI’s GPT4 model is much better than earlier versions at generating false positives (aka, “lying”), the model still gets facts wrong 10% of the time.

Ensure Notebooks are Secure

We believe the likelihood for this is high enough to warrant their targeted regulation. Currently there are no broad-based regulations focusing on AI safety, and the first set of legislation by the European Union is yet to become law as lawmakers are yet to agree on several issues. From a political standpoint, a difficulty in gaining acceptance of this policy is the fact that stakeholders will view this as an impediment to their development and argue that they should not be regulated either because 1) it will place an undue burden on them, or 2) they do not fall into a “high-risk” use group. Regulators must balance security concerns with the burdens placed upon stakeholders through compliance. From an implementation standpoint, a difficulty in implementing this policy will be managing the large number and disparate nature of entities, ranging from the smallest startups to the largest corporations, that will be implementing AI systems. Because different stakeholders face unique challenges that may not be applicable in other areas, regulators should tailor compliance to their constituents in order to make the regulation germane to their industry’s challenges.

New Initiative Seeks to Bring Collaboration to AI Security – Duo Security

New Initiative Seeks to Bring Collaboration to AI Security.

Posted: Tue, 12 Dec 2023 08:00:00 GMT [source]

Effective leadership also means pioneering those systems and safeguards needed to deploy technology responsibly — and building and promoting those safeguards with the rest of the world. My Administration will engage with international allies and partners in developing a framework to manage AI’s risks, unlock AI’s potential for good, and promote common approaches to shared challenges. The unfettered building of artificial intelligence into these critical aspects of society is weaving a fabric of future vulnerability. Policymakers must begin addressing this issue today to protect against these dangers by creating AI security compliance programs. These programs will create a set of best practices that will ensure AI users are taking the proper precautionary steps to protect themselves from attack.

To mitigate the creation of fake news, deep fakes, and the like, regulators in the US and EU are planning to create mechanisms to help end users distinguish the origin of content. While there’s no law yet, the recent executive order in the U.S. requires the Department of Commerce to issue guidance on tools and best practices for identifying AI-generated synthetic content. AI regulation in the US has been under great debate by US policymakers regarding how AI development and usage should be governed and what Secure and Compliant AI for Governments degree of legal and ethical oversight is needed. To date, the US has been strong in empowering existing governmental agencies to ensure law enforcement around AI. Led by Nic Chaillan, the Ask Sage team leverages extensive industry experience to address the unique challenges and requirements of its clients. Currently, over 2,000 government teams and numerous commercial contractors benefit from Ask Sage’s expertise and security features, including data labeling capabilities and zero-trust cybersecurity.

Secure and Compliant AI for Governments

For the riskiest systems—such as those that could cause catastrophic damage if misused—we might need a “safety case” regime, where companies make an affirmative case to a regulator that these systems do not pose unacceptable risks to society. Much like in other safety-critical industries, such as pharmaceuticals, aviation, and nuclear power, it should be the companies’ responsibility to prove their products are safe enough, for example, via a broad range of safety evaluations. The role of regulators should be to probe the evidence presented to them and determine what risks are acceptable. Flipping the script—disallowing only those models that have been proved unsafe by the regulator—appears inappropriate, as the risks are high and industry has far more technical expertise than the regulator. Effective frontier AI regulation would require that developers of the most capable systems make a substantial effort, using a significant amount of resources, to understand the risks their systems might pose—in particular by evaluating whether their systems have dangerous capabilities or are insufficiently controllable. These risk assessments should receive thorough external scrutiny from independent experts and researchers and inform regulatory decisions about whether new models are deployed and, if so, with what safeguards.

The Protect AI platform provides Application Security and ML teams the visibility and manageability required to keep your ML systems and AI applications secure from unique AI vulnerabilities. Whether your organization is fine tuning an off-the-shelf Generative AI foundational model, or building custom ML models, our platform empowers your entire organization to embrace a security-first approach to AI. Deepset has long been committed to helping organizations navigate the evolving AI regulatory landscape.

Further, once adversaries develop an attack, they may exercise extreme caution in their application of it in order to not arouse suspicion and to avoid letting their opponent know that its systems have been compromised. In this respect, there may be no counter-indications to system performance until after the most serious breach occurs. As content filters are drafted into these battles, there will be strong incentives both to attack them and to generate tools making these attacks easier to execute. Adversaries have already seen the power of using digital platforms in pursuit of their mission. ISIS organically grew an international following and successfully executed a large-scale recruitment program using social media.

As creators of this technology, Microsoft is careful in ensuring that the ethical use of AI is being practiced and that other organizations developing derivative products on top of this technology do the same. With Azure Open AI, organizations can enable sensitivity controls in their Azure tenant to maintain data security by ensuring it’s isolated to your Azure tenant. Mitigating the risks means improving your AI literacy – and you must be ready to get your hands dirty. You need to understand how it functions and test it, then educate people on how to use it and how not to use it to ensure your workforce can safely use it. If organizations aren’t mindful of these challenges, it will affect real people involved in data-driven decisions derived from these results.

Secure and Compliant AI for Governments

In some settings, attacks on physical objects may require larger, coarser attack patterns. This is because these physical objects must first be digitized, for example with a camera or sensor, to be fed into the AI algorithm, a process that can destroy finer level detail. However, even with this digitization requirement, attacks may still be difficult to perceive. The “attack turtle” that is incorrectly classified as a rifle in the example shown below is one such example of a physical attack that is nearly invisible. On the other end of the visibility axis are “imperceivable” attacks that are invisible to human senses.

When you use public chatbots like ChatGPT, anything you put in can become accessible outside your data environment. Or, as you go deeper into AI like Microsoft Copilot, you can establish proper data security practices by enabling access controls and proper visibility into what your users can access to ensure that AI is only doing what you want. To tackle the challenges of the ethical use of AI, it’s then critical for organizations to include all communities involved – more importantly, minorities and other communities – not only in the research phase but also in the governance aspect of their programs. While generative AI like ChatGPT or diffusion models for imaging like DALL-E open various opportunities for efficiency and productivity, they can also be used to spread misinformation. This raises complex human social factors that require policy and possibly regulatory solutions. Or, more relevantly, resembling the online personal data collection situation in the late 2000s, when nearly everyone would collect all they could lay their hands on.

Responsible AI is our overarching approach that has several dimensions such as ‘Fairness’, ‘Interpretability’, ‘Security’, and ‘Privacy’ that guide all of Google’s AI product development. (xxix)    the heads of such other agencies, independent regulatory agencies, and executive offices as the Chair may from time to time designate or invite to participate. (f)  To facilitate the hiring of data scientists, the Chief Data Officer Council shall develop a position-description library for data scientists (job series 1560) and a hiring guide to support agencies in hiring data scientists. (ix)    work with the Security, Suitability, and Credentialing Performance Accountability Council to assess mechanisms to streamline and accelerate personnel-vetting requirements, as appropriate, to support AI and fields related to other critical and emerging technologies.

  • In addition to a technical focus on securing models, research attention should also focus on creating testing frameworks that can be shared with industry, government, and military AI system operators.
  • Whether you need to caption a legislative proceeding, municipal council meeting, press briefing or an ad campaign, our solutions make it easy and cost-effective.
  • Additionally, the memorandum will direct actions to counter potential threats from adversaries and foreign actors using AI systems that may jeopardize U.S. security.
  • To reduce the chance of bioterrorism attacks, access to systems that could identify novel pathogens may need to be restricted to vetted researchers.

For example, CrowdStrike now offers a generative AI security analyst called Charlotte AI that uses high-fidelity security data in a tight human feedback loop to simplify and speed investigations, and react quickly to threats. While AI is not new, and the government has been working with AI for quite some time, there are still challenges with AI literacy and the fear of what risks AI can bring to organizations. The problem was that when ChatGPT came out and captured the world’s attention, there was widespread sensationalism in this “new” technology.

In the UK, National Health Service (NHS) formed an initiative to collect data related to COVID patients to develop a better understanding of the virus. Through various partnerships, the NHS set up the National COVID-19 Chest Imaging Database (NCCID), an open-source database of chest X-rays of COVID patients across the UK. This initiative aimed to develop deep learning techniques meant to provide better care for hospitalized COVID-19 patients. This includes monitoring weight, height, blood glucose, stress levels, heart rate, etc., and feeding this information to AI healthcare systems, which can notify doctors of any possible risks. There is little doubt that the emerging science of artificial intelligence is continuing to advance and grow, both in capabilities and the number of things that AI is being tasked with doing. And this is happening at lightning speed, despite there being little to no regulation in place to govern its use.

Where is AI used in defence?

One of the most notable ways militaries are utilising AI is the development of autonomous weapons and vehicle systems. AI-powered crewless aerial vehicles (UAVs), ground vehicles and submarines are employed for reconnaissance, surveillance and combat operations and will take on a growing role in the future.

What is the AI government called?

Some sources equate cyberocracy, which is a hypothetical form of government that rules by the effective use of information, with algorithmic governance, although algorithms are not the only means of processing information.

danblomberg

Author danblomberg

More posts by danblomberg