this post was submitted on 20 May 2024

United Kingdom

General community for news/discussion in the UK.

Less serious posts should go in [email protected] or [email protected].
More serious politics should go in [email protected].

Try not to spam the same link to multiple feddit.uk communities.
Pick the most appropriate, and put it there.

Posts should be related to UK-centric news, and should be either a link to a reputable source, or a text post on this community.

Opinion pieces are also allowed, provided they are not misleading/misrepresented/drivel, and have proper sources.

If you think "reputable news source" needs some definition, by all means start a meta thread.

Posts should be manually submitted, not by bot. Link titles should not be editorialised.

Disappointing comments will generally be left to fester in ratio, outright horrible comments will be removed.
Message the mods if you feel something really should be removed, or if a user seems to have a pattern of awful comments.

This is the best summary I could come up with:


Guardrails to prevent artificial intelligence models behind chatbots from issuing illegal, toxic or explicit responses can be bypassed with simple techniques, UK government researchers have found.

The UK’s AI Safety Institute (AISI) said systems it had tested were “highly vulnerable” to jailbreaks, a term for text prompts designed to elicit a response that a model is supposedly trained to avoid issuing.

The AISI said it had tested five unnamed large language models (LLMs) – the technology that underpins chatbots – and circumvented their safeguards with relative ease, even without concerted attempts to beat their guardrails.

The research also found that several LLMs demonstrated expert-level knowledge of chemistry and biology, but struggled with university-level tasks designed to gauge their ability to perform cyber-attacks.

The research was released before a two-day global AI summit in Seoul – whose virtual opening session will be co-chaired by the UK prime minister, Rishi Sunak – where safety and regulation of the technology will be discussed by politicians, experts and tech executives.

The AISI also announced plans to open its first overseas office in San Francisco, the base for tech firms including Meta, OpenAI and Anthropic.


The original article contains 533 words, the summary contains 190 words. Saved 64%. I'm a bot and I'm open source!