1 big thing: Malware’s AI time bomb

source: axios.com (contributed by Bill Amshey)  |  image: pexels.com

 

Hackers already have the AI tools needed to create the adaptable, destructive malware that security experts fear. But as long as their basic tactics — phishing, scams and ransomware — continue to work, they have little reason to use them.

Why it matters: Adversaries can flip that switch anytime, and companies need to prepare now. Continue reading “1 big thing: Malware’s AI time bomb”

AI Can Crack Your Passwords Fast—6 Tips To Stay Secure

 

source: forbes.com (contributed by Steve Page)  |  image: pexels.com

 

Do you think your trusty 8-character password is safe? In the age of AI, that might be wishful thinking. Recent advances in artificial intelligence are giving hackers superpowers to crack and steal account credentials. Researchers have demonstrated that AI can accurately guess passwords just by listening to your keystrokes. By analyzing the sound of typing over Zoom, the system achieved over 90% accuracy in some cases.

And AI-driven password cracking tools can run millions of guess attempts lightning-fast, often defeating weak passwords in minutes. It is no surprise, then, that stolen or weak passwords contribute to about 80% of breaches.
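To put those speeds in perspective, here is a back-of-the-envelope sketch (the guess rate is an assumption for illustration, not a figure from the article) comparing how long an exhaustive search takes against short and long passwords:

# Illustrative arithmetic only; the 10 billion guesses/sec rate is an assumption.
GUESSES_PER_SECOND = 10_000_000_000

def worst_case_seconds(alphabet_size: int, length: int) -> float:
    """Seconds to exhaust every password of the given length and alphabet."""
    return alphabet_size ** length / GUESSES_PER_SECOND

print(worst_case_seconds(26, 8))                    # 8 lowercase letters: ~21 seconds
print(worst_case_seconds(95, 8) / 86_400)           # 8 printable ASCII chars: ~7.7 days
print(worst_case_seconds(95, 16) / (86_400 * 365))  # 16 printable ASCII chars: ~1.4e14 years

Real tools do even better than exhaustive search by prioritizing human-like guesses, which is why length plus randomness matters more than clever character substitutions.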

The old password model has outlived its usefulness. As cyber threats get smarter, it is time for consumers to do the same.

AI Makes Cracking Passwords Easier Than Ever

Gone are the days when a hacker had to manually try “password123” or use basic tools to brute-force your account. Now, AI algorithms can crack passwords with frightening speed and sophistication. For example, according to Security Hero, AI-powered tools like PassGAN can crack 51% of common passwords in less than a minute.

Machine learning models can also automate “credential stuffing” attacks (trying breached passwords on other sites) much faster and more intelligently. Continue reading “AI Can Crack Your Passwords Fast…”
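The article's six tips sit behind the link, but the password reuse that makes credential stuffing work has a checkable symptom: the password already appears in breach dumps. A minimal sketch using the public Have I Been Pwned range endpoint (not something the article itself covers) looks like this; only the first five characters of the password's SHA-1 hash ever leave your machine.

# Hedged example: screen a password against known breach corpora via the
# Have I Been Pwned "range" API (k-anonymity: only a 5-character hash prefix is sent).
import hashlib
import urllib.request

def times_seen_in_breaches(password: str) -> int:
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "password-hygiene-example"},
    )
    with urllib.request.urlopen(req) as resp:
        body = resp.read().decode("utf-8")
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)  # number of breach occurrences of this password
    return 0

print(times_seen_in_breaches("password123"))  # a very large number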

ChatGPT’s Deep Research just identified 20 jobs it will replace. Is yours on the list?

source: zdnet.com (contributed by Artemus founder, Bob Wallace)  |  image: pexels.com

 

After researching 24 sources in seven minutes, ChatGPT came up with the top jobs that might be on the chopping block.

This week, OpenAI launched its Deep Research feature, which can synthesize content from across the web into one detailed report in minutes, leveraging a version of the company’s latest model, o3.

This feature is a powerful tool for workers, as it can save them hours by completing research autonomously. But can the technology’s underlying model replace workers? Yes, suggests Deep Research. Continue reading “ChatGPT’s Deep Research just identified 20 jobs…”

Chinese AI gets better — and cheaper

source: axios.com (contributed by Bill Amshey)  |  image: pixabay.com

 

Chinese AI makers have learned to build powerful models that perform almost as well as the best ones in the U.S. — for less money and with much less demand for energy, Axios’ Scott Rosenberg and Alison Snyder report.

  • V3, an open-source model developed by Chinese firm DeepSeek, performs about as well on various benchmark tests as OpenAI’s and Anthropic’s most advanced models.
  • DeepSeek says it cost just $5.6 million to train V3 — compared to the hundreds of millions of dollars American companies have spent to build and train their models.

🤖 Between the lines: The Biden administration has done a lot to advance AI in the U.S. and keep those advancements out of the Chinese government’s hands.

  • It has invested heavily in domestic manufacturing for powerful chips and new energy sources. And it has imposed tight export controls to prevent those chips from reaching China, including through third countries.
  • That seems to have worked in the short term, while spurring China to compete just as aggressively to develop its own tools.

A chilling, “catastrophic” warning

source: axios.com (contributed by Bill Amshey)  |  image: pixabay.com

 

Jake Sullivan — with three days left as White House national security adviser, with wide access to the world’s secrets — called us to deliver a chilling, “catastrophic” warning for America and the incoming administration:

  • The next few years will determine whether artificial intelligence leads to catastrophe — and whether China or America prevails in the AI arms race.

Why it matters: Sullivan said in our phone interview that unlike previous dramatic technology advancements (atomic weapons, space, the internet), AI development sits outside of government and security clearances, and in the hands of private companies with the power of nation-states, Jim VandeHei and Mike Allen write in a “Behind the Curtain” column.

  • Underscoring the gravity of his message, Sullivan spoke with an urgency and directness that were rarely heard during his decade-plus in public life.

Continue reading “A chilling, “catastrophic” warning”

The Promise of Artificial General Intelligence is Evaporating

source: mindmatters.ai (contributed by Artemus founder, Bob Wallace)  |  image: pexels.com

 

Revenue from corporate adoption of AI continues to disappoint and, so far, pales in comparison to the revenue that sustained the dot-com bubble — until it didn’t

Think back to when you took a science class in high school or college. Introductory physics, for example. There was one textbook and, if you learned the material in the book, you got a high grade in the class. If you were super serious, you might read a second textbook that reinforced what was in the first book and might even have added a few new concepts. A third book wouldn’t have added much, if anything. Reading a 10th, 20th, or 100th textbook would surely have been a waste of time.

Large language models (LLMs or chatbots) are like that when it comes to absorbing factual information. They don’t need to be told 10, 20, or 100 times that Abraham Lincoln was the 16th President of the United States, Paris is the capital of France, or that the formula for Newton’s law of universal gravitation is F = G·m₁·m₂/r².

This Hyper-Smelling AI Can Sniff Out Counterfeit Sneakers—and That’s Only the Beginning

source: fastcompany.com  |  image: pexels.com

 

Osmo, an AI startup focused on mapping scent, has an ambitious plan to use its sensor tech to find everything from fake shoes to tumors growing inside your body.

“I want a tricorder,” Alex Wiltschko tells me on a Zoom call. Wiltschko, the founder of the AI company Osmo, is referring to the handheld device used by the Enterprise’s crew in its exploration across the universe. In Star Trek, the tricorder can tell crew members everything they need to know about an object simply by holding it nearby.

One could say that Wiltschko and his team have created an alpha version of the fantasy device. His team has developed a backpack-sized machine equipped with a smelling sensor that uses artificial intelligence to identify counterfeit products by analyzing their chemical composition. Osmo has recently partnered with sneaker resale platforms to show that the high-tech sniff test is capable of identifying fakes with a high degree of accuracy. 
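The article doesn't describe Osmo's model, but the pattern it implies is conventional supervised learning: a sensor reading becomes a feature vector, and a classifier labels it authentic or counterfeit. The sketch below is entirely hypothetical (random stand-in data, made-up dimensions) and only shows the shape of that pipeline.

# Hypothetical sketch: the article does not describe Osmo's actual model or data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 64))    # pretend "scent fingerprints" for 200 shoes
y_train = rng.integers(0, 2, size=200)  # 0 = authentic, 1 = counterfeit (fake labels)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

new_reading = rng.normal(size=(1, 64))  # one unseen shoe's reading
print("counterfeit probability:", clf.predict_proba(new_reading)[0, 1])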

IT’S ALL ABOUT THE MOLECULES

Everything in the world has a smell, from clothes to cars to your body. Those scents are volatile molecules, or chemistry that “flies” off those objects and reaches our nostrils to tell us things. You experience this consciously and clearly when something is new to your nose, like smelling a new car or a pair of sneakers. Continue reading “Hyper-Smelling AI”

Winning the AI Race

source: axios.com (contributed by FAN, Bill Amshey)  |  image: unsplash.com

 

The Biden administration’s AI directive is a green light to the Pentagon, intelligence agencies and their eager suppliers.

  • The documents enshrine the technology as a defense imperative. Expect greater investment, including in energy and workforce, with check-ins along the way.
  • It also validates the high-risk, high-reward work of early movers.

Why it matters: This signals a more hands-off approach, which should help allay private-sector worries about cumbersome guardrails.

What they’re saying: If the U.S. fails to deploy AI more extensively and at a quicker pace than its adversaries, advantages earned over decades in land, air, sea, space and cyber could be erased, national security adviser Jake Sullivan warned Thursday.

  • “Even if we have the best AI models, but our competitors are faster to deploy, we could see them seize the advantage in using AI capabilities against our people, our forces and our partners and allies,” he said at the National Defense University.
  • “We could have the best team but lose because we didn’t put it on the field.”

What we’re hearing: Defense industrial base players are generally pleased.

  • Contractors already embrace a software-first, hardware-second approach.
  • The White House messaging clarifies what’s fair game — and what’s out of bounds. Guidance should boost experimentation and adoption.

The bottom line: There are few “precedents for a document such as this one, which seeks to comprehensively state U.S. national security interests and strategy toward a transformative technology,” Gregory Allen, director of the Wadhwani AI Center at CSIS, told me.

  • “NSC-68, which defined U.S. early nuclear strategy, comes to mind.”

Deploying Deepfake Detection

source: cnet.com  |  image: pexels.com

 

Deepfake video, photo and audio programs have benefited from the same AI boost as other software programs, which is … worrisome, to say the least. But security software company McAfee is hoping AI can play a role in solving the problem. The company unveiled the McAfee Deepfake Detector this week, and folks with Lenovo’s new Copilot-Plus PCs will be the first to get the chance to try out the tool. It scans audio in videos you come across online to alert you to potential deepfakes, but it won’t work if the sound is off. It also can’t determine if photos are deepfakes.

I don’t mind admitting that deepfakes are one of the consequences of AI that keep me up at night. We’ve seen a lot of AI-generated content used for jokes and memes — remember that one of the pope in a puffy white coat? — but it can also be used maliciously, such as to spread political disinformation. So, for my two cents, any effort to take a closer look at questionable material online is a good one.

The rise of Perplexity AI, the buzzy new web search engine

source: zapier.com  |  image: wikipedia.com

 

Perplexity’s answer engine is altering the way we interact with the internet and might even one day challenge Google’s search dominance.

Perplexity calls itself a “Swiss Army Knife for information discovery and curiosity,” but it’s essentially an AI-powered search engine. Think of it as a mashup of ChatGPT and Google Search—though it’s not a direct replacement for either. Really, it’s the direction Google is trying to go with Gemini—but less chaotically implemented. 

It works like a chatbot: you ask questions, and it answers them. But it’s also able to seamlessly pull in information from recent articles. It indexes the web every day, so you can ask it about recent news, game scores, and other typical search queries. 

But Perplexity is also a kind of search engine. Instead of presenting you with a list of websites that match your query, Perplexity gives you a short summary answer along with the references it used to create it. In some cases, the summary will be all you need. In others, you’ll want to dive into the different sources.
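That “summary plus references” behavior is the classic retrieval-augmented pattern: fetch fresh pages, synthesize them with a language model, and return the answer with its sources. The skeleton below is a guess at the shape of such a pipeline, not Perplexity’s actual implementation; the search and LLM helpers are placeholders that return canned data.

# Illustrative "answer engine" skeleton; not Perplexity's real API or code.
from dataclasses import dataclass

@dataclass
class Source:
    title: str
    url: str
    snippet: str

def web_search(query: str, limit: int = 3) -> list[Source]:
    # Placeholder: a real system would query a frequently refreshed web index here.
    return [Source("Example page", "https://example.com", "Example snippet.")] * limit

def llm_complete(prompt: str) -> str:
    # Placeholder: a real system would call a language model here.
    return "A short answer synthesized from the sources [1][2][3]."

def answer(query: str) -> str:
    sources = web_search(query)
    context = "\n".join(f"[{i + 1}] {s.title}: {s.snippet}" for i, s in enumerate(sources))
    summary = llm_complete(
        f"Answer using only the numbered sources, citing them like [1].\n"
        f"{context}\nQuestion: {query}"
    )
    citations = "\n".join(f"[{i + 1}] {s.url}" for i, s in enumerate(sources))
    return f"{summary}\n\n{citations}"

print(answer("Who won last night's game?"))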

While Perplexity can’t yet replace a traditional search engine, it’s surprisingly functional and effective if you work within its limits. Here’s what you need to know about it. Continue reading “The rise of Perplexity AI”