US, Britain and other countries ink ‘secure by design’ AI guidelines

The United States, the United Kingdom, Australia and 15 other countries have released global guidelines to help protect artificial intelligence (AI) models from being tampered with, urging companies to make their models “secure by design.”

On Nov. 26, the 18 countries released a 20-page document outlining how AI firms should handle their cybersecurity when developing or using AI models, as they claimed “security can often be a secondary consideration” in the fast-paced industry.

The guidelines consist mostly of general recommendations, such as maintaining tight control over an AI model's infrastructure, monitoring models for tampering before and after release, and training staff on cybersecurity risks.

The document did not address certain contentious issues in the AI space, such as what controls should apply to image-generating models and deepfakes, or to the collection and use of data in training models, a practice that has seen multiple AI firms sued on copyright infringement claims.

“We are at an inflection point in the development of artificial intelligence, which may well be the most consequential technology of our time,” U.S. Secretary of Homeland Security Alejandro Mayorkas said in a statement. “Cybersecurity is key to building AI systems that are safe, secure, and trustworthy.”

Related: EU tech coalition warns of over-regulating AI before EU AI Act finalization

The guidelines follow other government initiatives that weigh in on AI, including governments and AI firms meeting for an AI Safety Summit in London earlier in November to coordinate an agreement on AI development.

Meanwhile, the European Union is hashing out details of its AI Act, which will oversee the space, and U.S. President Joe Biden issued an executive order in October setting standards for AI safety and security — though both have seen pushback from the AI industry claiming they could stifle innovation.

Other co-signers to the new “secure by design” guidelines include Canada, France, Germany, Israel, Italy, Japan, New Zealand, Nigeria, Norway, South Korea and Singapore. AI firms, including OpenAI, Microsoft, Google, Anthropic and Scale AI, also contributed to developing the guidelines.

