Singularity | Artificial Intelligence (ai), Technology & Futurology

30 readers
1 user here now

About:

This sublemmy is a place for sharing news and discussion about artificial intelligence, core developments in humanity's technology, and the societal changes that come with them. Basically, a futurology sublemmy centered around AI, but not limited to AI only.

Rules:
  1. Posts that break the rules, and whose posters don't bring them into compliance after being notified, will be deleted no matter how much engagement they got, then reposted by me in a way that follows the rules. I will wait a maximum of 2 days for the poster to comply before doing this.
  2. No Low-quality/Wildly Speculative Posts.
  3. Keep posts on topic.
  4. Don't make posts with links to paywalled articles as their main focus.
  5. No posts linking to reddit posts.
  6. Memes are fine as long as they are quality and/or can lead to serious on-topic discussions. If we end up having too many memes, we will make a meme-specific singularity sublemmy.
  7. Titles must include information on how old the source is in this format dd.mm.yyyy (ex. 24.06.2023).
  8. Please be respectful to each other.
  9. No summaries made by LLMs. I would like to keep the quality of comments as high as possible.
  10. (Rule implemented 30.06.2023) Don't make posts with links to tweets as their main focus. Melon decided that content on the platform is going to be locked behind a login requirement, and I'm not going to force everyone to make a twitter account just so they can see some news.
  11. No AI-generated images/videos unless their role is to represent new advancements in generative technology that are no more than 1 month old.
  12. If the title of the post isn't the original title of the article or paper, then the first thing in the body of the post should be the original title, written in this format: "Original title: {title here}".

Related sublemmies:

[email protected] (Our community focuses on programming-oriented, hype-free discussion of Artificial Intelligence (AI) topics. We aim to curate content that truly contributes to the understanding and practical application of AI, making it, as the name suggests, “actually useful” for developers and enthusiasts alike.)

Note:

My posts on this sub currently rely heavily on info from r/singularity and other subreddits on reddit. I'm planning to at some point compile a list of sites that write/aggregate the kind of news this sublemmy is about, so we can get news faster and rely on reddit less. If you know any good sites, please dm me.

founded 1 year ago
submitted 1 year ago* (last edited 1 year ago) by [email protected] to c/[email protected]

Original title: The US Military Is Taking Generative AI Out for a Spin

Summary: The US military is testing five LLMs as part of an eight-week exercise run by the Pentagon’s digital and AI office. "It was highly successful. It was very fast," a US Air Force colonel is quoted as saying. "We did it with secret-level data," he adds, saying that it could be deployed by the military in the very near term.


Article: https://gizmodo.com/google-says-itll-scrape-everything-you-post-online-for-1850601486

Summary of the article:

Google has updated its privacy policy to explicitly state it can use virtually anything you post online to enhance its AI tools, a change that raises intriguing privacy questions and has prompted reactions from platforms such as Twitter and Reddit.

Google's New Privacy Policy: Google has altered its privacy policy to state that it can scrape almost any content posted online for the advancement of its AI tools.

· It uses this data to improve existing services and develop new products, features, and technologies.

· The data harvested aids in training Google's AI models and building products like Google Translate, Bard, and Cloud AI.

Impact on Internet Users: This policy modification challenges conventional concepts of online privacy.

· It suggests that any public post on the internet could be used by Google.

· This practice necessitates a shift in how we perceive online activity, focusing on how the information could be employed rather than who can see it.

Legal and Copyright Concerns: The usage of data from the internet to fuel AI systems raises legal and copyright issues.

· It remains uncertain whether such a practice is legal, with courts likely to address these new copyright issues in the coming years.

· This practice affects consumers in surprising ways, raising questions about data ownership.

Reactions from Other Platforms: Twitter and Reddit have responded to this AI-related issue by restricting access to their APIs.

· This action aimed to protect their intellectual property from data scraping but resulted in breaking third-party tools used to access these platforms.

· Controversies have ensued, such as Twitter contemplating charging public entities for tweets, and Reddit seeing a mass protest due to API changes disrupting the work of moderators.

Elon Musk's Stance on Web Scraping: Elon Musk has recently expressed concerns about web scraping.

· He blamed several Twitter mishaps on the company's need to prevent others from data extraction.

· Despite these claims, most IT experts believe these problems are likely due to management issues or technical difficulties.


For the first time in the world, researchers at Tel Aviv University have encoded a toxin produced by bacteria into mRNA (messenger RNA) molecules and delivered these particles directly to cancer cells, causing the cells to produce the toxin, which eventually killed them with a success rate of 50%.


Scaling sequence length has become a critical demand in the era of large language models. However, existing methods struggle with either computational complexity or model expressivity, rendering the maximum sequence length restricted. In this work, we introduce LongNet, a Transformer variant that can scale sequence length to more than 1 billion tokens, without sacrificing performance on shorter sequences. Specifically, we propose dilated attention, which expands the attentive field exponentially as the distance grows. LongNet has significant advantages: 1) it has linear computational complexity and a logarithmic dependency between tokens; 2) it can serve as a distributed trainer for extremely long sequences; 3) its dilated attention is a drop-in replacement for standard attention, which can be seamlessly integrated with existing Transformer-based optimization. Experimental results demonstrate that LongNet yields strong performance on both long-sequence modeling and general language tasks. Our work opens up new possibilities for modeling very long sequences, e.g., treating a whole corpus or even the entire Internet as a sequence.

Link to the repo: https://github.com/microsoft/unilm
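The dilated-attention idea from the abstract can be pictured as a sparsity mask over the attention matrix. A minimal sketch, assuming a single segment length and dilation rate (the paper mixes several geometrically increasing pairs); the function name and structure are illustrative, not taken from the repo:

```python
import numpy as np

def dilated_attention_mask(seq_len: int, segment: int, dilation: int) -> np.ndarray:
    """Boolean mask: True where query i may attend to key j.

    Queries and keys are restricted to the same segment and to positions
    that share the same offset modulo the dilation rate, so each query
    sees only segment/dilation keys instead of all seq_len of them.
    """
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    for start in range(0, seq_len, segment):
        idx = np.arange(start, min(start + segment, seq_len))
        for offset in range(dilation):
            sel = idx[idx % dilation == offset]  # every dilation-th position
            mask[np.ix_(sel, sel)] = True
    return mask

mask = dilated_attention_mask(seq_len=8, segment=4, dilation=2)
```

Because each query attends to a fixed number of keys per (segment, dilation) pair, the cost grows linearly with sequence length, which is the property the abstract claims.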


Original title of the article: People almost always get this simple math problem wrong: Can you solve it?

The question goes: “A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?”
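The intuitive answer of $0.10 fails the "costs $1.00 more" condition ($1.10 − $0.10 = $1.00, not $1.00 more than $0.10). The algebra, worked out with exact fractions to avoid floating-point noise:

```python
from fractions import Fraction

# Let ball be the ball's price; the bat costs ball + 1.00.
# ball + (ball + 1.00) = 1.10  =>  2 * ball = 0.10  =>  ball = 0.05
total = Fraction(110, 100)   # $1.10 together
difference = Fraction(1)     # the bat costs $1.00 more
ball = (total - difference) / 2
bat = ball + difference

print(ball, bat)  # 1/20 and 21/20 of a dollar: $0.05 and $1.05
```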


The development of neural networks to create artificial intelligence in computers was originally inspired by how biological systems work. These "neuromorphic" networks, however, run on hardware that looks nothing like a biological brain, which limits performance.

Now, researchers from Osaka University and Hokkaido University plan to change this by creating neuromorphic "wetware." The work is published in the journal Advanced Functional Materials.

While neural-network models have achieved remarkable success in applications such as image generation and cancer diagnosis, they still lag far behind the general processing abilities of the human brain. In part, this is because they are implemented in software using traditional computer hardware that is not optimized for the millions of parameters and connections that these models typically require.


Imagine this: you're at a vibrant cocktail party 🍹, filled with the buzz of conversation and the clink of glasses 🍻. You're a laid-back observer 👀, tucked comfortably in a corner. Yet, you can still easily figure out the social relations between different people, understand what's going on, and even provide social suggestions by reading people's verbal and non-verbal cues.

If a large language model (LLM) could replicate this level of social aptitude, then we could say that it possesses certain social abilities. Curious how different LLMs perform when it comes to understanding and navigating social interactions? Check out these demos processed by AI models!

Site: https://chats-lab.github.io/KokoMind/

Martineski: If you know the exact date this was published, let me know. I hate it when they do it like this:


New theoretical research proves that machine learning on quantum computers requires far simpler data than previously believed. The finding paves a path to maximizing the usability of today's noisy, intermediate-scale quantum computers for simulating quantum systems and other tasks better than classical digital computers, while also offering promise for optimizing quantum sensors.


A team at the National Institute of Standards and Technology in Boulder, Colorado, has reported the successful implementation of a 400,000 pixel superconducting nanowire single-photon detector (SNSPD) that they say will pave the way for the development of extremely light-sensitive large-format superconducting cameras. Their paper, "A superconducting-nanowire single-photon camera with 400,000 pixels," was published in the preprint repository arXiv on June 15.

Researchers from the University of Colorado's Department of Physics and the Jet Propulsion Laboratory at the California Institute of Technology also participated in the project.

The camera is now the largest of its type. Its pixel array is 400 times larger than that of the previous largest photon camera. It can operate at frequencies ranging from visible light to ultraviolet and infrared, and can capture images at super-high speeds, in a matter of picoseconds.


We could have AI models in a couple years that hold the entire internet in their context window.


A lot of billionaire money is going into preventing an AI apocalypse instead of actual pressing problems of humanity.


Not entirely unexpected, since we're still in the early days of AR/VR, but a little disappointing.


TL;DR: OpenAI announces a new team dedicated to researching superintelligence.


We used ChatGPT, text-to-speech synthesis, and a Raspberry Pi to create a digital assistant that can reply as almost anyone you can think of - like Joe Rogan, David Attenborough, or Neil deGrasse Tyson.

We also talk at the end about the future of this technology, and AI as a whole - there's a large potential for something like this to impact almost every aspect of our daily lives, and it increasingly feels like we're on the precipice of a paradigm shift. Which is super exciting and also terrifying lol

I hope you find the video interesting, or at the least entertaining!
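The persona part of a pipeline like the one described can be sketched in a few lines. This is a hypothetical reconstruction, not the video's actual code: the `build_messages` helper is invented for illustration, and the `openai`/`pyttsx3` calls in the comment are assumptions about which libraries one might use:

```python
def build_messages(persona: str, user_text: str) -> list[dict]:
    """Frame the request so the model replies in the chosen persona's voice."""
    return [
        {"role": "system",
         "content": f"You are {persona}. Reply in their voice and style."},
        {"role": "user", "content": user_text},
    ]

# On the Pi, the loop would then look roughly like (untested sketch):
#   reply = client.chat.completions.create(
#       model="gpt-3.5-turbo",
#       messages=build_messages("David Attenborough", heard_text))
#   engine = pyttsx3.init()
#   engine.say(reply.choices[0].message.content)
#   engine.runAndWait()

msgs = build_messages("Joe Rogan", "What do you think about AI?")
```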
