{"id":644,"date":"2025-03-15T16:13:58","date_gmt":"2025-03-15T16:13:58","guid":{"rendered":"https:\/\/violethoward.com\/new\/the-risks-of-ai-generated-code-are-real-heres-how-enterprises-can-manage-the-risk\/"},"modified":"2025-03-15T16:13:58","modified_gmt":"2025-03-15T16:13:58","slug":"the-risks-of-ai-generated-code-are-real-heres-how-enterprises-can-manage-the-risk","status":"publish","type":"post","link":"https:\/\/violethoward.com\/new\/the-risks-of-ai-generated-code-are-real-heres-how-enterprises-can-manage-the-risk\/","title":{"rendered":"The risks of AI-generated code are real \u2014 here&#8217;s how enterprises can manage the risk"},"content":{"rendered":" \r\n<br><div>\n\t\t\t\t<div id=\"boilerplate_2682874\" class=\"post-boilerplate boilerplate-before\">\n<p><em>Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More<\/em><\/p>\n\n\n\n<hr class=\"wp-block-separator has-css-opacity is-style-wide\"\/>\n<\/div><p>Not that long ago, humans wrote almost all application code. But that\u2019s no longer the case: The use of AI tools to write code has expanded dramatically. Some experts, such as Anthropic CEO Dario Amodei, expect that AI will write 90% of all code within the next 6 months.<\/p>\n\n\n\n<p>Against that backdrop, what is the impact for enterprises? Code development practices have traditionally involved various levels of control, oversight and governance to help ensure quality, compliance and security. With AI-developed code, do organizations have the same assurances? Even more importantly, perhaps, organizations must know which models generated their AI code.<\/p>\n\n\n\n<p>Understanding where code comes from is not a new challenge for enterprises. That\u2019s where source code analysis (SCA) tools fit in. Historically, SCA tools have not provide insight into AI, but that\u2019s now changing. 
Multiple vendors, including Sonar, Endor Labs and Sonatype, are now providing different types of insights that can help enterprises with AI-developed code.<\/p>\n\n\n\n<p>\u201cEvery customer we talk to now is interested in how they should be responsibly using AI code generators,\u201d Sonar CEO Tariq Shaukat told VentureBeat.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-financial-firm-suffers-one-outage-a-week-due-to-ai-developed-code\">Financial firm suffers one outage a week due to AI-developed code<\/h2>\n\n\n\n<p>AI tools are not infallible. Many organizations learned that lesson early on when content development tools provided inaccurate results known as hallucinations.<\/p>\n\n\n\n<p>The same basic lesson applies to AI-developed code. As organizations move from experimental mode into production mode, they have increasingly come to the realization that AI-generated code is often very buggy. Shaukat noted that AI-developed code can also lead to security and reliability issues. The impact is real and it\u2019s also not trivial.<\/p>\n\n\n\n<p>\u201cI had a CTO, for example, of a financial services company about six months ago tell me that they were experiencing an outage a week because of AI generated code,\u201d said Shaukat.<\/p>\n\n\n\n<p>When Shaukat asked the CTO whether his team was doing code reviews, the answer was yes. That said, the developers didn\u2019t feel anywhere near as accountable for the code, and were not spending as much time and rigor on it, as they had previously.\u00a0<\/p>\n\n\n\n<p>The reasons code ends up being buggy, especially for large enterprises, can vary. One particularly common issue, though, is that enterprises often have large code bases with complex architectures that an AI tool might not know about. In Shaukat\u2019s view, AI code generators don\u2019t generally deal well with the complexity of larger and more sophisticated code bases.<\/p>\n\n\n\n<p>\u201cOur largest customer analyzes over 2 billion lines of code,\u201d said Shaukat. 
\u201cYou start dealing with those code bases, and they\u2019re much more complex, they have a lot more tech debt and they have a lot of dependencies.\u201d<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-the-challenges-of-ai-developed-code\">The challenges of AI-developed code<\/h2>\n\n\n\n<p>To Mitchell Johnson, chief product development officer at Sonatype, it is also very clear that AI-developed code is here to stay.<\/p>\n\n\n\n<p>Software developers must follow what he calls the engineering Hippocratic Oath. That is, to do no harm to the codebase. This means rigorously reviewing, understanding and validating every line of AI-generated code before committing it \u2014 just as developers would do with manually written or open-source code.\u00a0<\/p>\n\n\n\n<p>\u201cAI is a powerful tool, but it does not replace human judgment when it comes to security, governance and quality,\u201d Johnson told VentureBeat.<\/p>\n\n\n\n<p>The biggest risks of AI-generated code, according to Johnson, are:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Security risks<\/strong>: AI is trained on massive open-source datasets, often including vulnerable or malicious code. If unchecked, it can introduce security flaws into the software supply chain.<\/li>\n\n\n\n<li><strong>Blind trust<\/strong>: Developers, especially less experienced ones, may assume AI-generated code is correct and secure without proper validation, leading to unchecked vulnerabilities.<\/li>\n\n\n\n<li><strong>Compliance and context gaps<\/strong>: AI lacks awareness of business logic, security policies and legal requirements, making compliance and performance trade-offs risky.<\/li>\n\n\n\n<li><strong>Governance challenges<\/strong>: AI-generated code can sprawl without oversight. Organizations need automated guardrails to track, audit and secure AI-created code at scale.<\/li>\n<\/ul>\n\n\n\n<p>\u201cDespite these risks, speed and security don\u2019t have to be a trade-off,\u201d said Johnson. 
\u201cWith the right tools, automation and data-driven governance, organizations can harness AI safely \u2014 accelerating innovation while ensuring security and compliance.\u201d<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-models-matter-identifying-open-source-model-risk-for-code-development\">Models matter: Identifying open source model risk for code development<\/h2>\n\n\n\n<p>Organizations are using a variety of models to generate code. Anthropic Claude 3.7, for example, is a particularly powerful option. Google Code Assist, OpenAI\u2019s o3 and GPT-4o models are also viable choices.<\/p>\n\n\n\n<p>Then there\u2019s open source. Vendors such as Meta and Qodo offer open-source models, and there is a seemingly endless array of options available on Hugging Face. Karl Mattson, Endor Labs CISO, warned that these models pose security challenges that many enterprises aren\u2019t prepared for.<\/p>\n\n\n\n<p>\u201cThe systematic risk is the use of open source LLMs,\u201d Mattson told VentureBeat. \u201cDevelopers using open-source models are creating a whole new suite of problems. They\u2019re introducing into their code base using sort of unvetted or unevaluated, unproven models.\u201d<\/p>\n\n\n\n<p>Unlike commercial offerings from companies like Anthropic or OpenAI, which Mattson describes as having \u201csubstantially high quality security and governance programs,\u201d open-source models from repositories like Hugging Face can vary dramatically in quality and security posture. Mattson emphasized that rather than trying to ban the use of open-source models for code generation, organizations should understand the potential risks and choose appropriately.<\/p>\n\n\n\n<p>Endor Labs can help organizations detect when open-source AI models, particularly from Hugging Face, are being used in code repositories. 
The company\u2019s technology also evaluates these models across 10 risk attributes, including operational security, ownership, utilization and update frequency, to establish a risk baseline.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-specialized-detection-technologies-emerge\">Specialized detection technologies emerge<\/h2>\n\n\n\n<p>To deal with emerging challenges, SCA vendors have released a number of different capabilities.<\/p>\n\n\n\n<p>For instance, Sonar has developed an AI code assurance capability that can identify code patterns unique to machine generation. The system can detect when code was likely AI-generated, even without direct integration with the coding assistant. Sonar then applies specialized scrutiny to those sections, looking for hallucinated dependencies and architectural issues that wouldn\u2019t appear in human-written code.<\/p>\n\n\n\n<p>Endor Labs and Sonatype take a different technical approach, focusing on model provenance. Sonatype\u2019s platform can be used to identify, track and govern AI models alongside their software components. Endor Labs can also identify when open-source AI models are being used in code repositories and assess the potential risk.<\/p>\n\n\n\n\n\n\n\n<p>When implementing AI-generated code in enterprise environments, organizations need structured approaches to mitigate risks while maximizing benefits.\u00a0<\/p>\n\n\n\n<p>There are several key best practices that enterprises should consider, including:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Implement rigorous verification processes<\/strong>: Shaukat recommends that organizations have a rigorous process around understanding where code generators are used in specific parts of the code base. 
This is necessary to ensure the right level of accountability and scrutiny of generated code.<\/li>\n\n\n\n<li><strong>Recognize AI\u2019s limitations with complex codebases<\/strong>: While AI-generated code can easily handle simple scripts, it can be limited when it comes to complex code bases that have a lot of dependencies.<\/li>\n\n\n\n<li><strong>Understand the unique issues in AI-generated code<\/strong>: Shaukat noted that while AI avoids common syntax errors, it tends to create more serious architectural problems through hallucinations. Code hallucinations can include making up a variable name or a library that doesn\u2019t actually exist.<\/li>\n\n\n\n<li><strong>Require developer accountability<\/strong>: Johnson emphasizes that AI-generated code is not inherently secure. Developers must review, understand and validate every line before committing it.<\/li>\n\n\n\n<li><strong>Streamline AI approval<\/strong>: Johnson also warns of the risk of shadow AI, or uncontrolled use of AI tools. Many organizations either ban AI outright (which employees ignore) or create approval processes so complex that employees bypass them. Instead, he suggests businesses create a clear, efficient framework to evaluate and greenlight AI tools, ensuring safe adoption without unnecessary roadblocks.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-what-this-means-for-enterprises\">What this means for enterprises<\/h2>\n\n\n\n<p>The risk of shadow AI code development is real.<\/p>\n\n\n\n<p>The volume of code that organizations can produce with AI assistance is dramatically increasing and could soon comprise the majority of all code.<\/p>\n\n\n\n<p>The stakes are particularly high for complex enterprise applications where a single hallucinated dependency can cause catastrophic failures. 
For organizations looking to adopt AI coding tools while maintaining reliability, implementing specialized code analysis tools is rapidly shifting from optional to essential.<\/p>\n\n\n\n<p>\u201cIf you\u2019re allowing AI-generated code in production without specialized detection and validation, you\u2019re essentially flying blind,\u201d Mattson warned. \u201cThe types of failures we\u2019re seeing aren\u2019t just bugs \u2014 they\u2019re architectural failures that can bring down entire systems.\u201d<\/p>\n\t\t\t<\/div>\r\n<br>\r\n<br><a href=\"https:\/\/venturebeat.com\/ai\/the-risks-of-ai-generated-code-are-real-heres-how-enterprises-can-manage-the-risk\/\">Source link <\/a>","protected":false},"excerpt":{"rendered":"<p>Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More Not that long ago, humans wrote almost all application code. 
But that\u2019s no longer the case: The use of AI tools to write code has expanded dramatically. Some experts, such as Anthropic CEO Dario Amodei, expect [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":645,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[33],"tags":[],"class_list":["post-644","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-automation"],"aioseo_notices":[],"jetpack_featured_media_url":"https:\/\/violethoward.com\/new\/wp-content\/uploads\/2025\/03\/ai_generated_code_SMK.jpg","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/posts\/644","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/comments?post=644"}],"version-history":[{"count":0,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/posts\/644\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/media\/645"}],"wp:attachment":[{"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/media?parent=644"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/categories?post=644"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/violethoward.com\/new\/wp-json\/wp\/v2\/tags?post=644"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}