Are we entrusting too much coding to AI, too soon? The results of two new studies, along with fresh concerns expressed elsewhere, suggest we might be. 🤖
Exhibit A is a think piece that achieved the remarkable feat of eliciting a thoughtful, non-provocative response on X from Elon Musk, who compared the collective atrophy of basic programming skills to the loss of human muscle memory for navigating cities in the Google Maps era. “We’re at this weird inflection point in software development,” began the blog post, which was also picked up by Futurism and the New York Times. “Every junior dev I talk to has Copilot or Claude or GPT running 24/7,” continued Namanyay Goel, a developer and entrepreneur. “They’re shipping code faster than ever. But when I dig deeper into their understanding of what they’re shipping? That’s where things get concerning,” he added, before warning ominously of the long-term adverse consequences of this rush to over-automate software development. 💡
Tech and cyber expert Andy Ellis expressed similar concerns on the CISO Series Podcast. In an episode titled ‘Our Developers’ New Motto is “LLM Take the Wheel”’, he cited Joel Spolsky’s 2002 article on ‘The Law of Leaky Abstractions’ in detailing the risks posed by an over-reliance on LLMs for software development: “When you see your code behave in a certain way, you might be like: ‘Oh, I probably have a memory leak, or I’m not running on big enough hardware’, whatever it is. And when you lose that understanding of the knowledge of complexity of what’s going on underneath it, you sort of don’t have the ability to reason about what’s happening. And if you let AI write your code […] that’s an abstraction barrier between your prompt engineering and some other level of code, which then has more abstraction barriers below it.” 🧩
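Spolsky’s law is easy to demonstrate. Here’s a minimal Python sketch (our illustration, not from the podcast): the `in` operator reads identically for a list and a set, but the data structure underneath leaks through as a dramatic performance gap – exactly the kind of behaviour you can only reason about if you understand what sits below the abstraction.

```python
import timeit

haystack_list = list(range(100_000))
haystack_set = set(haystack_list)

def lookup(haystack, needles):
    # The 'in' operator looks the same for both containers, but the
    # abstraction leaks: a list scans elements one by one (O(n)),
    # while a set hashes straight to the answer (O(1) on average).
    return [n in haystack for n in needles]

needles = list(range(99_000, 100_000))
print("list:", timeit.timeit(lambda: lookup(haystack_list, needles), number=1))
print("set: ", timeit.timeit(lambda: lookup(haystack_set, needles), number=1))
```

On a typical machine the list version is orders of magnitude slower – and a developer who has only ever prompted for “code that checks membership” has no model for why.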
Keeping developers connected to their code – especially understanding how to avoid insecure code and its consequences – is something our Bug Bounty customers often strive for by leveraging vulnerability reports for internal training purposes. The insights our ethical hackers provide, notably through proofs of concept for bugs and suggested mitigations, can teach devs secure development best practices and reduce the number of security flaws they create. 🧠
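For a flavour of the kind of lesson such reports can carry, here’s a hypothetical Python sketch (ours, not an actual report) contrasting an injectable SQL query with its parameterised mitigation – the sort of before-and-after a proof of concept typically walks a developer through.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

def find_user_vulnerable(name: str):
    # Vulnerable: user input is spliced into the SQL string, so a
    # payload such as "' OR 1=1 --" rewrites the query's meaning.
    return conn.execute(
        f"SELECT * FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_fixed(name: str):
    # Mitigation: a parameterised query makes the driver treat the
    # input strictly as data, never as SQL syntax.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR 1=1 --"
print(find_user_vulnerable(payload))  # dumps every row
print(find_user_fixed(payload))       # returns nothing
```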
Duplication is bad, duplication is bad
Back to AI-bashing now, with new research revealing how AI coding assistants might be eroding code quality. Based on an analysis of 211 million lines of code, a study from GitClear found eight times as many code blocks containing five or more duplicated lines in 2024 as in 2023. Higher code duplication and reduced refactoring are generally seen as leading to higher maintenance costs, lower code readability and a wider proliferation of vulnerabilities. ⚠️
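Why duplication widens the window for vulnerabilities is easy to sketch in Python (a hypothetical example, not drawn from the GitClear dataset): when the same check is pasted rather than shared, a security patch applied to one copy silently misses the others.

```python
# Two pasted copies of the same URL check. A security fix upgraded the
# first to an allowlist, but the stale second copy still blocklists a
# single scheme and lets e.g. "data:" or "vbscript:" URLs through.
def set_avatar_url(url: str) -> str:
    if not url.startswith(("http://", "https://")):  # patched copy
        raise ValueError("unsupported scheme")
    return url

def set_profile_link(url: str) -> str:
    if url.startswith("javascript:"):                # stale copy
        raise ValueError("unsupported scheme")
    return url

# The refactor that duplication displaces: one shared helper means one
# place to patch when the next bypass is reported.
def validate_url(url: str) -> str:
    if not url.startswith(("http://", "https://")):
        raise ValueError("unsupported scheme")
    return url
```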
If GitClear’s findings were heartening news for coders worried about being eclipsed and ultimately replaced by AI, they will be further cheered by an admission from OpenAI researchers that even frontier models “are still unable to solve the majority” of coding tasks. As reported by Futurism, neither OpenAI’s o1 and GPT-4o models nor Anthropic’s Claude 3.5 Sonnet could match human devs in benchmarking tests around “resolving bugs and implementing fixes to them, or management tasks that saw the models trying to zoom out and make higher-level decisions”. 🧐
Inscrutable AI
The black-box nature of AI outputs is a problem for security researchers just as it is for developers. In ‘The Pitfalls of “Security by Obscurity” and What They Mean for Transparent AI’, researchers from New York University argue that excessive (albeit partly understandable) secrecy about the inner workings of AI models is undermining security research – especially since the complexity of these models makes transparency all the more imperative. The researchers propose measures such as third-party audits, structured vulnerability disclosure programs, and the creation of shared benchmarks and datasets to evaluate AI safety and fairness. ⬛
AI also comes up (but of course it does!) in an interesting interview with Booking.com’s chief security officer (CSO), but let’s take a break from AI and highlight another noteworthy quote from the Q&A: “My biggest challenge – but it also applies to all CSOs and CISOs – is being able to articulate what are technical attacks, technical vulnerabilities into business impact,” Marnie Wilking told Infosecurity Magazine. “Being able to go back and forth and bridge that communication gap is a very important skill set but also a very difficult job. At the end of the day, what the leadership team wants to know, what we should be worried about is how it impacts our partners, our customers and our ability to deliver to customers.” 👨‍💼 (Incidentally, understanding the true business impact of vulnerabilities is a cornerstone of our own business model, helping us prioritise the most critical bugs first, as our head of triage discussed in an interview about YesWeHack’s in-house triage service.)
'Not pleased at all'
“I’ve been watching the litigious world that we’re now in, and I’m not pleased at all,” said Kevin Winter, global CISO at Deloitte, in another illuminating interview. “I didn’t think it would go this direction,” he said of the newish SEC disclosure rules, which have introduced mandatory reporting of “material” cybersecurity incidents and exposed CISOs to the risk of criminal liability for making the wrong calls over security incidents. ⚖️ Winter featured alongside Richard Marcus, CISO at AuditBoard, in the latest instalment of SecurityWeek’s CISO Conversations series.
CISOs will note with interest that UK-based employees using iCloud will see their security protections reduced after Apple turned off end-to-end encryption for users in the country on Friday. As reported by The Record, Apple said it disabled the feature, called Advanced Data Protection (ADP), in response to British government requests for a backdoor to assist with data retrieval related to criminal investigations. An Apple spokesperson said: “As we have said many times before, we have never built a backdoor or master key to any of our products or services and we never will.” A former British government minister has downplayed the security risks of users losing access to ADP. In contrast, Dr Joseph Lorenzo Hall, distinguished technologist at the Internet Society, was “saddened” by the news. 🍎
Events ahoy!
With spring on the horizon, our first conferences of the year are heaving into view. First up for the YesWeHack team is Next IT Security in Stockholm on 13 March. Pitched by the organisers as “the most exclusive cybersecurity event in the world”, Next IT Security is a forum for meeting C-level cybersecurity decision-makers for a “day of expert-led discussions” about the most pressing cybersecurity challenges of the moment. Our very own Sam Lowe and Jan Nieminen will be in attendance to discuss the benefits of crowdsourced security, including during a five-minute ‘Fire Starter’ session at 10:50am.
Soon after, we’re heading to InCyber Forum Europe (1-3 April; Lille, France) to demo the YesWeHack platform and field questions about Bug Bounty and vulnerability management, as well as to hand out some swag. If you want to find out more about how to strengthen your security posture cost-effectively, you can find us at booth D1. The organisers are expecting 20,000 visitors from 103 countries for the 17th edition, whose theme is ‘zero trust’.
Read this monthly roundup even sooner by subscribing to CrowdSecWisdom – our LinkedIn newsletter curating news, insights and inspiration around offensive security topics like Bug Bounty, vulnerability disclosure and management, pentest management and attack surface protection.
Are you a bug hunter or do you have an interest in ethical hacking? Check out our ethical hacking-focused sister newsletter, Bug Bounty Bulletin – offering hunting advice, interviews with hunters and CTF-style challenges, among other things.