Tech
Want to spot a deepfake? Focus on the eyes
A technique from astronomy could help detect deepfakes by spotting unrealistic reflections in the eyes of AI-generated images.
By Ananya
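The astronomy connection boils down to a consistency check: in a genuine photo, both corneas reflect the same light sources, so the pattern of bright spots in the two eyes should roughly agree, while AI image generators often get that agreement wrong. One measure researchers have borrowed from galaxy studies is the Gini coefficient, which astronomers use to describe how a galaxy's light is spread across its pixels. Below is a minimal sketch of that idea, assuming the two eye regions have already been cropped to grayscale arrays; the helper names, the `eyes_disagree` function and the 0.1 threshold are illustrative choices, not the researchers' published code.

```python
import numpy as np

def gini(values):
    """Gini coefficient of non-negative pixel values: 0 means the light is
    spread perfectly evenly, 1 means it is concentrated in a single pixel.
    Astronomers use the same statistic to describe galaxy light profiles."""
    v = np.sort(np.asarray(values, dtype=float).ravel())
    n = v.size
    if n == 0 or v.sum() == 0:
        return 0.0
    index = np.arange(1, n + 1)
    return float(np.sum((2 * index - n - 1) * v) / (n * v.sum()))

def eyes_disagree(left_eye, right_eye, threshold=0.1):
    """Flag a face as a possible deepfake when the light distribution in the
    two eye crops (grayscale NumPy arrays) differs by more than `threshold`.
    Real photos tend to show matching reflections in both eyes; generated
    faces often do not. The threshold here is a hypothetical value."""
    return abs(gini(left_eye) - gini(right_eye)) > threshold
```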
The racism, sexism, ableism and other biases common in bot-made images may lead to harm and discrimination in the real world.
Experts worry that by making it harder to tell what’s true, AI can threaten people’s reputations, health, fair elections and more.
Unlike people, this type of artificial intelligence isn’t good at learning concepts that it can apply to new situations.
Today’s bots can’t turn against us, but they can cause harm. “AI safety” aims to train this tech so it will always be honest, harmless and helpful.
Researchers break chatbots in order to fix them. This so-called red-teaming is an important way to improve AI’s behavior.
Scammers can use AI to create deepfake mimics of people’s voices. AntiFake could make that type of trick much harder to pull off.
New research used the game Overcooked to show how AI can learn to collaborate with — or manipulate — us.
Supercomputing and AI cut the early discovery steps from decades to just 80 hours. The process led to a new solid electrolyte.
The energy demands of ChatGPT and similar AI tools can threaten Earth's climate. So researchers have begun redesigning how data centers run and how AI is built.