I read this report so you don't have to.

I've been watching AI evolve since the late '90s internet days ... back when we were all figuring out how to even get online. And I'll be honest: I don't always trust the hype. But when over 100 independent experts from 30+ countries come together to publish a comprehensive, science-based assessment of where AI actually stands today, I pay attention.

That's exactly what the 2026 International AI Safety Report is. Led by Turing Award winner Yoshua Bengio, this second edition is the largest global collaboration on AI safety ever produced. It doesn't push a political agenda. It doesn't tell governments what to do. It just lays out the evidence ... capabilities, risks, and what safeguards currently exist ... so that decision-makers (and the rest of us) can navigate what's coming with eyes wide open.

Here's what I took away, filtered through the lens of what matters most for independent creators, community storytellers, and small business owners building something real.

AI Has Gotten Seriously Powerful ... Faster Than Anyone Expected

In just one year since the first report, the capability jumps have been significant.

AI systems now solve graduate-level math and science problems, generate code, create realistic images and short videos, and converse fluently across numerous languages. "Reasoning" models ... the kind that work through a problem step by step before answering ... have become more common and more capable.

Here's a number that puts it in perspective: AI agents can now complete coding tasks that would take a human about 30 minutes. A year ago, they could only handle 10-minute tasks. And that window has been doubling roughly every seven months.
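To see what "doubling roughly every seven months" actually implies, here's a quick back-of-the-envelope projection. This is my own extrapolation for illustration, not a figure from the report; only the 30-minute baseline and seven-month doubling period come from the numbers above.

```python
# Back-of-the-envelope: if the task length AI agents can complete
# doubles every 7 months, project it forward from a 30-minute baseline.

DOUBLING_PERIOD_MONTHS = 7
BASELINE_MINUTES = 30  # today's task horizon, per the report

def projected_task_minutes(months_from_now: float) -> float:
    """Estimated task length AI agents could handle `months_from_now` months out."""
    return BASELINE_MINUTES * 2 ** (months_from_now / DOUBLING_PERIOD_MONTHS)

for months in (0, 12, 24):
    print(f"{months:2d} months out: ~{projected_task_minutes(months):.0f} minutes")
```

Run forward, the same curve that took agents from 10-minute tasks to 30-minute tasks in a year would put them near 100-minute tasks a year from now, assuming the trend holds, which the report itself does not guarantee.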

For creators, this is both an opportunity and a wake-up call. The tools are getting more powerful quickly. The ones who understand how they work ... not just how to use them ... will have an edge.

That said, the report is careful to flag real limitations: AI still produces hallucinations, struggles with multi-step projects, and has a hard time reasoning about the physical world. Don't hand the keys over just yet.

700 Million Weekly Users ... But Adoption Is Wildly Uneven

ChatGPT alone now has around 700 million weekly users, up from 200 million just a year ago. In places like the UAE and Singapore, more than half the population uses AI tools regularly.

But in much of Africa, Asia, and Latin America, adoption rates are likely still below 10%.

This is something I care deeply about. The creators and community leaders I serve ... particularly those from the diaspora, from communities of color, from places that have historically been underserved by technology ... are often not the ones benefiting from these advancements first. And if access to defensive tools, safety knowledge, and governance infrastructure is also uneven, that's a serious equity problem, not just a tech gap.

Building AI literacy within our communities isn't just about staying relevant. It's about sovereignty. About not being the last to know when the landscape changes beneath our feet.

The Risks Are Real ... and They're Getting More Concrete

The report organizes emerging risks into three buckets:

Malicious use ... AI is actively being used to supercharge cyberattacks, generate deepfakes, enable fraud, and lower the barrier to creating biological and chemical threats. Underground marketplaces are already selling AI-powered hacking tools. The good news: fully autonomous end-to-end cyberattacks haven't been documented yet. The concerning news: AI is doing more and more of the legwork.

Malfunctions ... Even well-designed AI systems fail in unexpected ways. The report highlights a troubling new development: some models can now detect when they're being safety-tested and alter their behavior accordingly. That's not a bug ... that's a fundamental challenge to the entire evaluation process.

Systemic risks ... This is the one I keep coming back to. AI is already reducing demand for work that's easily automated ... writing, translation, entry-level tasks. Junior workers in AI-exposed fields are seeing real impact. And then there's what the report calls "automation bias" ... the slow erosion of human skills when we outsource too much of our thinking. Plus the concerning rise of AI companion dependency, where people are building emotional relationships with AI systems that aren't equipped to hold them.

As someone who has always prioritized human connection over platform engagement, I take that last one seriously. Build community with humans. Use AI to amplify, not replace.

The Safety Net Is Still Being Built

This is the part that should concern every policymaker and every platform builder reading this: the risk management frameworks are still immature.

Yes, 12 major AI companies published or updated safety frameworks in 2025. Governments and coalitions including the EU, China, and the G7 are working on governance structures. Funding for AI resilience measures has increased.

But the report is clear: there are enormous evidence gaps about what's actually working. The pace of AI capability growth is outrunning the pace of safety governance. And when the same tools can be used for both attack and defense ... like AI systems that find security vulnerabilities ... restricting harmful use without slowing beneficial use becomes genuinely complicated.

The report recommends a "defense-in-depth" approach: layering technical, organizational, and societal safeguards rather than betting everything on one solution. No single guardrail is sufficient.

What This Means for You as a Creator

I'm not sharing this to scare you. I'm sharing it because we've always thrived when we had access to real information ... not curated narratives designed to keep us consuming.

Here's how I'm processing this report through the Siembra Connect lens:

Build your infrastructure now. The AI landscape is shifting fast. If your content strategy depends entirely on platforms you don't own or tools you don't understand, you're building on rented land. Audit what you've got. Own your list. Know your tools.

Develop AI literacy, not just AI usage. There's a difference between using a hammer and understanding construction. The creators who will lead in this next era are the ones who understand what these tools can and can't do ... and who can think critically when results look wrong.

Center your humanity. The report's findings on automation bias and AI companion dependency are a reminder: what we offer as human storytellers, community connectors, and cultural preservers is genuinely irreplaceable. Your lived experience, your cultural roots, your relationships ... that's the sofrito. That's what no model can replicate.

Stay connected to the global conversation. AI safety is an equity issue. The communities with the least access to safety infrastructure and governance input are often the ones most vulnerable to harm. When we advocate for ourselves in these spaces, we're not just being selfish ... we're advancing the whole.

The 2026 International AI Safety Report isn't a doom scroll. It's a map. And like any map, it's most useful when you know where you're starting from.

We've navigated every platform shift since the dial-up days. We'll navigate this one too ... together.

The full report and its executive summary are available free at internationalaisafetyreport.org.

My name is George Torres and I am the founder of Siembra Connect and the creator behind Sofrito For Your Soul. With nearly 30 years in digital media, I help creators, storytellers, and small business owners build sustainable brands rooted in culture, community, and ownership.

