{"id":1892,"date":"2025-06-03T09:07:29","date_gmt":"2025-06-03T09:07:29","guid":{"rendered":"https:\/\/violethoward.com\/new\/how-sp-is-using-deep-web-scraping-ensemble-learning-and-snowflake-architecture-to-collect-5x-more-data-on-smes\/"},"modified":"2025-06-03T09:07:29","modified_gmt":"2025-06-03T09:07:29","slug":"how-sp-is-using-deep-web-scraping-ensemble-learning-and-snowflake-architecture-to-collect-5x-more-data-on-smes","status":"publish","type":"post","link":"https:\/\/violethoward.com\/new\/how-sp-is-using-deep-web-scraping-ensemble-learning-and-snowflake-architecture-to-collect-5x-more-data-on-smes\/","title":{"rendered":"How S&P is using deep web scraping, ensemble learning and Snowflake architecture to collect 5X more data on SMEs"},"content":{"rendered":" \r\n
The investing world has a significant problem when it comes to data about small and medium-sized enterprises (SMEs). It has nothing to do with data quality or accuracy; it is the lack of any data at all.

Assessing SME creditworthiness has been notoriously challenging because small-enterprise financial data is not public and is therefore very difficult to access.

S&P Global Market Intelligence, a division of S&P Global and a leading provider of credit ratings and benchmarks, claims to have solved this longstanding problem. The company's technical team built RiskGauge, an AI-powered platform that crawls otherwise elusive data from over 200 million websites, processes it through numerous algorithms and generates risk scores.

Built on Snowflake architecture, the platform has increased S&P's coverage of SMEs by 5X.

"Our objective was expansion and efficiency," explained Moody Hadi, S&P Global's head of new product development for risk solutions. "The project has improved the accuracy and coverage of the data, benefiting clients."

RiskGauge's underlying architecture

Counterparty credit management essentially assesses a company's creditworthiness and risk based on several factors, including financials, probability of default and risk appetite. S&P Global Market Intelligence provides these insights to institutional investors, banks, insurance companies, wealth managers and others.

"Large and financial corporate entities lend to suppliers, but they need to know how much to lend, how frequently to monitor them, what the duration of the loan would be," Hadi explained. "They rely on third parties to come up with a trustworthy credit score."

But there has long been a gap in SME coverage. Hadi pointed out that, while large public companies like IBM, Microsoft, Amazon and Google are required to disclose their quarterly financials, SMEs have no such obligation, which limits financial transparency. From an investor's perspective, consider that there are roughly 10 million SMEs in the U.S., compared with about 60,000 public companies.

S&P Global Market Intelligence claims it now has all of them covered: where the firm previously had data on only about 2 million SMEs, RiskGauge has expanded that to 10 million.

The platform, which went into production in January, is based on a system built by Hadi's team that pulls firmographic data from unstructured web content, combines it with anonymized third-party datasets and applies machine learning (ML) and advanced algorithms to generate credit scores.

The company uses Snowflake to mine company pages and process them into firmographic drivers (market segmenters) that are then fed into RiskGauge. The platform's data pipeline runs from crawling through pre-processing, mining and curation to scoring; specifically, Hadi's team uses Snowflake's data warehouse and Snowpark Container Services for the pre-processing, mining and curation steps in the middle of that flow.

At the end of this process, SMEs are scored on a combination of financial, business and market risk, with 1 being the highest and 100 the lowest. Investors also receive RiskGauge reports detailing financials, firmographics, business credit reports, historical performance and key developments. They can also compare companies with their peers.
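S&P has not published the pipeline's code, but its shape follows directly from that description. Below is a minimal Python sketch of the flow, with entirely hypothetical function names and placeholder bodies standing in for the real crawlers, miners, curators and scoring models:

```python
# Illustrative sketch only: hypothetical stage names mirroring the order described
# above (crawl -> pre-process -> mine -> curate -> score); not S&P's actual code.

def crawl(domain: str) -> list[str]:
    """Pull the landing page, 'contact us' pages and news several URL layers deep."""
    return []  # placeholder: the real crawler returns raw page bodies

def preprocess(pages: list[str]) -> str:
    """Strip tags and scripts so that only human-readable text remains."""
    return " ".join(pages)

def mine(text: str) -> dict:
    """Ensemble miners vote on firmographic drivers (name, sector, location...)."""
    return {"description_length": len(text)}  # placeholder driver

def curate(drivers: dict, third_party: dict) -> dict:
    """Blend mined drivers with anonymized third-party datasets."""
    return {**drivers, **third_party}

def score(drivers: dict) -> int:
    """Collapse financial, business and market risk into a 1-100 RiskGauge score."""
    return 50  # placeholder mid-range score

def run_pipeline(domain: str, third_party: dict) -> int:
    """Crawl -> pre-process -> mine -> curate -> score, as described above."""
    return score(curate(mine(preprocess(crawl(domain))), third_party))
```

In production, per Hadi's description, the equivalents of the middle stages run on Snowflake's data warehouse and Snowpark Container Services rather than as local functions.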
How S&P is collecting valuable company data

Hadi explained that RiskGauge employs a multi-layer scraping process that pulls various details from a company's web domain, such as basic "contact us" and landing pages as well as news-related information. The miners go down several URL layers to scrape relevant data.

"As you can imagine, a person can't do this," said Hadi. "It is going to be very time-consuming for a human, especially when you're dealing with 200 million web pages." Which, he noted, yields several terabytes of website information.

After the data is collected, the next step is to run algorithms that remove anything that isn't text; Hadi noted that the system is not interested in JavaScript or even HTML tags. The data is cleaned so that it becomes human-readable rather than code. Then it is loaded into Snowflake, and several data miners are run against the pages.

Ensemble algorithms are critical to the prediction process. These algorithms combine predictions from several individual models (base models, or "weak learners," that are each only a little better than random guessing) to validate company information such as name, business description, sector, location and operational activity. The system also factors in any polarity in the sentiment around announcements disclosed on the site.

"After we crawl a site, the algorithms hit different components of the pages pulled, and they vote and come back with a recommendation," Hadi explained. "There is no human in the loop in this process; the algorithms are basically competing with each other. That helps with the efficiency to increase our coverage."
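The multi-layer crawl described above amounts to a breadth-first walk of a company's domain with a depth cap. Here is a minimal sketch using the requests and BeautifulSoup libraries; the two-layer limit and everything else here is an assumption for illustration, not S&P's actual crawler:

```python
# Breadth-first, same-domain crawl down a fixed number of URL layers.
# Hypothetical sketch: the depth limit and timeout are illustrative choices.
from collections import deque
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

def crawl_domain(start_url: str, max_depth: int = 2) -> dict[str, str]:
    """Return {url: raw_html} for pages up to max_depth link-layers below start_url."""
    domain = urlparse(start_url).netloc
    seen: set[str] = set()
    pages: dict[str, str] = {}
    frontier = deque([(start_url, 0)])
    while frontier:
        url, depth = frontier.popleft()
        if url in seen or depth > max_depth:
            continue
        seen.add(url)
        try:
            resp = requests.get(url, timeout=10)
            resp.raise_for_status()
        except requests.RequestException:
            continue  # skip unreachable or erroring pages
        pages[url] = resp.text
        # queue links that stay on the same domain, one layer deeper
        for anchor in BeautifulSoup(resp.text, "html.parser").find_all("a", href=True):
            next_url = urljoin(url, anchor["href"])
            if urlparse(next_url).netloc == domain:
                frontier.append((next_url, depth + 1))
    return pages
```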
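The voting step can be pictured as several independent weak learners each proposing a value for the same firmographic field, with the majority answer winning. A toy sketch with made-up extractor models follows; the real base models, their features and any weighting are not disclosed:

```python
# Illustrative majority-vote sketch: each weak learner proposes a value for one
# firmographic field (e.g. sector); the most common answer wins. Hypothetical models.
from collections import Counter
from typing import Callable

def vote(extractors: list[Callable[[str], str]], page_text: str) -> tuple[str, float]:
    """Return (winning value, share of votes) across all base models."""
    proposals = [extract(page_text) for extract in extractors]
    value, count = Counter(proposals).most_common(1)[0]
    return value, count / len(proposals)

# Toy base models, each only a little better than guessing on its own slice of evidence.
def keyword_model(text: str) -> str:
    return "manufacturing" if "factory" in text.lower() else "services"

def heading_model(text: str) -> str:
    return "manufacturing" if "plant" in text.lower() else "services"

def about_page_model(text: str) -> str:
    return "services"

sector, confidence = vote([keyword_model, heading_model, about_page_model],
                          "Our factory and plant have served clients since 1998.")
print(sector, confidence)  # -> manufacturing 0.666...
```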
Following that initial load, the system monitors site activity and automatically runs weekly scans. It doesn't update information every week, only when it detects a change, Hadi added. When performing subsequent scans, the system compares a hash key of the landing page from the previous crawl with a newly generated one; if they are identical, no changes were made and no action is required. If the hash keys don't match, the system is triggered to update the company's information.

This continuous scraping is important to ensure the system remains as up to date as possible. "If they're updating the site often, that tells us they're alive, right?" Hadi noted.

Challenges with processing speed, giant datasets, unclean websites

There were challenges to overcome when building out the system, of course, particularly the sheer size of the datasets and the need for quick processing. Hadi's team had to make trade-offs to balance accuracy and speed.

"We kept optimizing different algorithms to run faster," he explained. "And tweaking: some algorithms we had were really good, with high accuracy, high precision and high recall, but they were computationally too costly."

Websites also do not always conform to standard formats, which requires flexible scraping methods.

"You hear a lot about designing websites with an exercise like this, because when we originally started, we thought, 'Hey, every website should conform to a sitemap or XML,'" said Hadi. "And guess what? Nobody follows that."

The team didn't want to hard-code rules or build robotic process automation (RPA) into the system, Hadi said, because sites vary so widely and they knew the most important information they needed was in the text. This led to a system that pulls only the necessary components of a site, then cleanses them down to the actual text, discarding code and any JavaScript or TypeScript.

As Hadi noted, "the biggest challenges were around performance and tuning and the fact that websites by design are not clean."
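That cleanup step, keeping the text and throwing away scripts and markup, is where much of the "websites are not clean" problem gets absorbed. A minimal sketch using BeautifulSoup, which is one common way to do it rather than S&P's actual cleanser:

```python
# Minimal text-extraction sketch: drop scripts, styles and tags, keep readable text.
from bs4 import BeautifulSoup

def extract_text(raw_html: str) -> str:
    soup = BeautifulSoup(raw_html, "html.parser")
    # remove nodes that will never be human-readable prose
    for node in soup(["script", "style", "noscript"]):
        node.decompose()
    # collapse what's left into whitespace-normalized plain text
    return " ".join(soup.get_text(separator=" ").split())

html = ("<html><head><script>trackUser()</script></head>"
        "<body><h1>Acme Tooling</h1><p>Precision parts since 1982.</p></body></html>")
print(extract_text(html))  # -> "Acme Tooling Precision parts since 1982."
```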
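Finally, the weekly change check Hadi described earlier, hashing the landing page and re-mining a site only when the key differs from the previous crawl's, reduces to a few lines. A minimal sketch, with a simple in-memory dictionary standing in for wherever the previous hash keys actually live:

```python
# Minimal change-detection sketch: re-mine a site only when its landing-page hash changes.
import hashlib

previous_hashes: dict[str, str] = {}  # hypothetical store of last-crawl hash keys

def landing_page_changed(domain: str, landing_page_html: str) -> bool:
    """Return True (and remember the new key) if the page differs from the last crawl."""
    new_key = hashlib.sha256(landing_page_html.encode("utf-8")).hexdigest()
    old_key = previous_hashes.get(domain)
    previous_hashes[domain] = new_key
    return old_key != new_key  # first crawl or changed content -> trigger an update
```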