Nano Banana Pro Is the Best AI Image Tool I’ve Tested. It’s Also Deeply Troubling

source: cnet.com  :  image: google

 

You can’t talk about AI image generation without including Google’s nano banana models, for good reason. The two versions, the original (Gemini 2.5 Flash Image) and the new pro (Gemini 3 Pro Image), have only been around for a couple of months, but they’ve quickly redefined what’s possible with AI image generation and editing. 

The pro version uses Gemini 3’s reasoning model to power results. That means it takes a little bit longer to generate, but the images are more detailed. You can also add swaths of legible text to your images, an industry first. The pro version is the best AI image generation tool CNET has ever tested, hands down. But that isn’t necessarily a good thing. Continue reading “Nano Banana Pro Is the Best AI Image Tool I’ve Tested. It’s Also Deeply Troubling”

A New Type of AI Malware Threatens Smart Homes, But These Security Habits Can Help

source: cnet.com  |  image: pexels.com

 

The rise of promptware means cybercriminals have new ways to hack smart homes. New security methods are required to fight back

 

Old-school home hacking is typically ineffective — it takes too much effort for the average burglar and modern devices are better protected against mass internet attacks (especially if you keep firmware updated). But now there’s a new trick for cybercriminals to use: It’s called prompt injections — or promptware — and it can make AI do things you never wanted it to. 

In a smart home, that means that promptware can force AI to seize control of devices, doing everything from turning up the heat and switching off lights to unlocking smart locks

Experts are still learning what dangers promptware presents to LLM-style AI and the many places it can hide. Meanwhile, there are steps you can take to help stay safe and alert. Here’s what I suggest. Continue reading “A New Type of AI Malware Threatens Smart Homes…”

AI Data Centers Are Massive, Energy-Hungry and Headed Your Way

source: cnet.com  | image: pixabay.com

 

Behind your ChatGPT and Gemini queries, there’s a land grab happening to keep up the fevered pace of gen AI’s growth. The consequences are significant.

 

From the outside, this nondescript building in Piscataway, New Jersey, looks like a standard corporate office surrounded by lookalike buildings. Even when I walk through the second set of double doors with a visitor badge slung around my neck, it still feels like I’ll soon find cubicles, water coolers and light office chatter.

Instead, it’s one brightly lit server hall after another, each with slightly different characteristics, but all with one thing in common — a constant humming of power. 

The first area I see has white tiled floors and rows of 7-foot-high server racks protected by black metal cages. Inside the cage structure, I feel cool air rushing from the floor toward the servers to prevent overheating. The wind muffles my tour guide’s voice, and I have to shout over the noise for him to hear me. 

Continue reading “AI Data Centers Are Massive, Energy-Hungry and Headed Your Way”

Beyond ChatGPT: Shadow AI Risks Lurk in SaaS Tools

source: technewsworld.com  |  image: pexels.com

 

Unapproved use of ChatGPT and other generative AI tools is creating a growing cybersecurity blind spot for businesses. As employees adopt these technologies without proper oversight, they may inadvertently expose sensitive data — yet many managers still underestimate the risk and delay implementing third-party defenses.

This type of unsanctioned technology use, known as shadow IT, has long posed security challenges. Now, its AI-driven counterpart — shadow AI — is triggering new concerns among cybersecurity experts. Continue reading “Beyond ChatGPT: Shadow AI Risks Lurk in SaaS Tools”

CIA Leveraging Digital Transformation Tools in HUMINT Missions

source: executivegov.com (contributed by FAN, Steve Page)  |  Image: pixabay.com

 

One of the United States’ most secretive agencies is using digital transformation tools such as AI and human-machine teaming as it tries to solve the nation’s toughest national security problems.

Since the CIA established the Directorate of Digital Innovation, or DDI, in 2015, the agency has increasingly encouraged entwining digital technology into its core human intelligence, or HUMINT, mission, where intelligence is obtained from human sources. Juliane Gallina, the CIA’s deputy director for digital innovation, said every DDI mission is guided by human-machine teaming, which starts with data and is improved with AI before being put to use by CIA agents. Continue reading “CIA Leveraging Digital Transformation Tools in HUMINT Missions”

Meet the AI Fraud Fighters: A Deepfake Granny, Digital Bots and a YouTube Star

source: cnet.com  |  image: pexels.com

 

It was almost an hour into our Google Meet call. I was interviewing Kitboga, a popular YouTube scam baiter with nearly 3.7 million subscribers, known for humorously entrapping fraudsters in common scams while livestreaming.

“I assume I’m talking to Evan Zimmer,” he says with a mischievous glance, his eyes exposed without his trademark aviator sunglasses on. We were close to the end of our conversation when he realized that my image and audio could have been digitally altered to impersonate me this whole time. “If I’m completely honest with you, there was not a single moment where I thought you could be deepfaking,” he says.

Continue reading “Meet the AI Fraud Fighters: A Deepfake Granny+”

1 big thing: Malware’s AI time bomb

source: axios.com (contributed by Bill Amshey)  | image: pexels.com

 

Hackers already have the AI tools needed to create the adaptable, destructive malware that security experts fear. But as long as their basic tactics — phishing, scams and ransomware — continue to work, they have little reason to use them.

Why it matters: Adversaries can flip that switch anytime, and companies need to prepare now. Continue reading “1 big thing: Malware’s AI time bomb”

AI Can Crack Your Passwords Fast—6 Tips To Stay Secure

 

source: forbes.com (contributed by Steve Page)  |  image: pexels.com

 

Do you think your trusty 8-character password is safe? In the age of AI, that might be wishful thinking. Recent advances in artificial intelligence are giving hackers superpowers to crack and steal account credentials. Researchers have demonstrated that AI can accurately guess passwords just by listening to your keystrokes. By analyzing the sound of typing over Zoom, the system achieved over 90% accuracy in some cases.

And AI-driven password cracking tools can run millions of guess attempts lightning-fast, often defeating weak passwords in minutes. It is no surprise, then, that stolen or weak passwords contribute to about 80% of breaches​.

The old password model has outlived its usefulness. As cyber threats get smarter, it is time for consumers to do the same.

AI Makes Cracking Passwords Easier Than Ever

Gone are the days when a hacker had to manually try “password123” or use basic tools to brute-force your account. Now, AI algorithms can crack passwords with frightening speed and sophistication. For example, according to Security Hero, AI-powered tools like PassGAN can crack 51% of common passwords in less than a minute.

Machine learning models can also automate “credential stuffing” attacks (trying breached passwords on other sites) much faster and more intelligently. Continue reading “AI Can Crack Your Passwords Fast…”

ChatGPT’s Deep Research just identified 20 jobs it will replace. Is yours on the list?

source: zdnet.com (contributed by Artemus founder, Bob Wallace)  |  image: pexels.com

 

After researching 24 sources in seven minutes, ChatGPT came up with the top jobs that might be on the chopping block.

This week, OpenAI launched its Deep Research feature which can synthesize content from across the web into one detailed report in minutes leveraging a version of the company’s latest model, o3

This feature is a powerful tool for workers, as it can save them hours by completing research autonomously. But can the technology’s underlying model replace workers? Yes, suggests Deep Research. Continue reading “ChatGPT’s Deep Research just identified 20 jobs…”

Chinese AI gets better — and cheaper

source: axios.com (contributed by Bill Amshey)  |  image: pixabay.com

 

Chinese AI makers have learned to build powerful models that perform almost as well as the best ones in the U.S. — for less money and with much less demand for energy, Axios’ Scott Rosenberg and Alison Snyder report.

  • V3, an open-source model developed by Chinese firm DeepSeek, performs about as well on various benchmark tests as OpenAI and Anthropic’s most advanced models.
  • DeepSeek says it cost just $5.6 million to train V3 — compared to the hundreds of millions of dollars American companies have spent to build and train their models.

🤖 Between the lines: The Biden administration has done a lot to advance AI in the U.S. and keep those advancements out of the Chinese government’s hands.

  • It has invested heavily in domestic manufacturing for powerful chips and new energy sources. And it has imposed tight export controls to prevent those chips from reaching China, including through third countries.
  • That seems to have worked in the short term, while spurring China to compete just as aggressively to develop its own tools.