Build the web crawler
As described in the Architecture section of this guide, the application runs in batches, one batch per company.
Capture and process the robots.txt file
After the dataset is ready, you need to confirm whether the domain has a robots.txt file.
To capture and process the robots.txt file, do the following:
- If you haven't already done so, install the requests library by running the following command in a terminal: pip install requests
- Run the following script. This script does the following:
  - It defines a check_robots_txt function that takes a domain as input.
  - It constructs the full URL of the robots.txt file.
  - It sends a GET request to the robots.txt URL.
  - If the request succeeds (status code 200), a robots.txt file exists.
  - If the request fails or returns a different status code, the robots.txt file doesn't exist or isn't accessible.
```python
import requests
from urllib.parse import urljoin

def check_robots_txt(domain):
    # Ensure the domain starts with a protocol
    if not domain.startswith(('http://', 'https://')):
        domain = 'https://' + domain

    # Construct the full URL for robots.txt
    robots_url = urljoin(domain, '/robots.txt')

    try:
        # Send a GET request to the robots.txt URL
        response = requests.get(robots_url, timeout=5)

        # Check if the request was successful (status code 200)
        if response.status_code == 200:
            print(f"robots.txt found at {robots_url}")
            return True
        else:
            print(f"No robots.txt found at {robots_url} (Status code: {response.status_code})")
            return False
    except requests.RequestException as e:
        print(f"Error checking {robots_url}: {e}")
        return False
```
Note
This script handles exceptions for network errors and other issues.
- If the robots.txt file exists, use the following script to download it:
```python
import requests

def download(url, headers=None):
    # Download the content at the given URL and return it as text
    response = requests.get(url, headers=headers, timeout=5)
    response.raise_for_status()  # Raise an exception for non-2xx responses
    return response.text

def download_robots_txt(base_url, headers=None):
    # Append '/robots.txt' to the base URL to get the robots.txt file's URL
    robots_url = base_url.rstrip('/') + '/robots.txt'
    try:
        return download(robots_url, headers=headers)
    except requests.exceptions.RequestException as e:
        print(f"Error downloading robots.txt: {e}\nGenerating sitemap using combinations...")
        return None
```
Note
You can customize or modify these scripts for your use case. You can also combine them.
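For example, the following is a minimal sketch of one way to combine the two scripts: check whether robots.txt exists, and download its contents only if it does. The wrapper name fetch_robots_txt_if_present is a hypothetical name used for illustration, and the sketch assumes that check_robots_txt and download_robots_txt from the previous snippets are defined in the same module.

```python
def fetch_robots_txt_if_present(domain):
    # Hypothetical wrapper around the two snippets above
    if check_robots_txt(domain):                  # existence check
        if not domain.startswith(('http://', 'https://')):
            domain = 'https://' + domain
        return download_robots_txt(domain)        # download the file's text
    return None

if __name__ == "__main__":
    robots_txt = fetch_robots_txt_if_present("example.com")
    if robots_txt:
        print(robots_txt[:200])  # Print the first 200 characters as a sanity check
```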
Capture and process the sitemap
Next, you need to process the sitemap.
To capture and process the sitemap, do the following:
- Run the following script. This script defines a check_and_download_sitemap function that:
  - Accepts a base URL, an optional sitemap URL from robots.txt, and a user agent string. (One way to obtain the sitemap URL from robots.txt is sketched after the script.)
  - Checks multiple potential sitemap locations, including the location from robots.txt (if provided).
  - Attempts to download the sitemap from each location.
  - Verifies that the downloaded content is in XML format.
  - Calls the parse_sitemap function to extract URLs. This function:
    - Parses the XML content of the sitemap.
    - Handles both regular sitemaps and sitemap index files.
    - Recursively fetches child sitemaps if it encounters a sitemap index.
```python
import requests
from urllib.parse import urljoin
import xml.etree.ElementTree as ET

def check_and_download_sitemap(base_url, robots_sitemap_url=None, user_agent='SitemapBot/1.0'):
    headers = {'User-Agent': user_agent}
    sitemap_locations = [
        robots_sitemap_url,
        urljoin(base_url, '/sitemap.xml'),
        urljoin(base_url, '/sitemap_index.xml'),
        urljoin(base_url, '/sitemap/'),
        urljoin(base_url, '/sitemap/sitemap.xml')
    ]

    for sitemap_url in sitemap_locations:
        if not sitemap_url:
            continue

        print(f"Checking for sitemap at: {sitemap_url}")
        try:
            response = requests.get(sitemap_url, headers=headers, timeout=10)
            if response.status_code == 200:
                content_type = response.headers.get('Content-Type', '')
                if 'xml' in content_type:
                    print(f"Successfully downloaded sitemap from {sitemap_url}")
                    return parse_sitemap(response.text)
                else:
                    print(f"Found content at {sitemap_url}, but it's not XML. Content-Type: {content_type}")
        except requests.RequestException as e:
            print(f"Error downloading sitemap from {sitemap_url}: {e}")

    print("No sitemap found.")
    return []

def parse_sitemap(sitemap_content):
    urls = []
    try:
        root = ET.fromstring(sitemap_content)

        # Handle both sitemap and sitemapindex
        for loc in root.findall('.//{http://www.sitemaps.org/schemas/sitemap/0.9}loc'):
            urls.append(loc.text)

        # If it's a sitemap index, recursively fetch each sub-sitemap
        if root.tag.endswith('sitemapindex'):
            all_urls = []
            for url in urls:
                print(f"Fetching sub-sitemap: {url}")
                # Pass the sub-sitemap URL explicitly so that it is downloaded directly
                sub_sitemap_urls = check_and_download_sitemap(url, robots_sitemap_url=url)
                all_urls.extend(sub_sitemap_urls)
            return all_urls
    except ET.ParseError as e:
        print(f"Error parsing sitemap XML: {e}")

    return urls

if __name__ == "__main__":
    base_url = input("Enter the base URL of the website: ")
    robots_sitemap_url = input("Enter the sitemap URL from robots.txt (or press Enter if none): ").strip() or None

    urls = check_and_download_sitemap(base_url, robots_sitemap_url)

    print(f"Found {len(urls)} URLs in sitemap:")
    for url in urls[:5]:  # Print first 5 URLs as an example
        print(url)
    if len(urls) > 5:
        print("...")
```
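The robots_sitemap_url argument is the sitemap location declared in the site's robots.txt file, if any. The following is a minimal sketch of one way to extract it; the helper name extract_sitemap_urls_from_robots is hypothetical, and the example assumes the robots.txt text was downloaded with the earlier download_robots_txt snippet.

```python
def extract_sitemap_urls_from_robots(robots_txt_content):
    # Hypothetical helper: collect the URLs declared on "Sitemap:" lines.
    # A robots.txt file can list more than one sitemap.
    sitemap_urls = []
    for line in robots_txt_content.splitlines():
        line = line.strip()
        if line.lower().startswith('sitemap:'):
            sitemap_urls.append(line.split(':', 1)[1].strip())
    return sitemap_urls

# Example: feed the first declared sitemap into check_and_download_sitemap
robots_txt = download_robots_txt('https://example.com')  # from the earlier snippet
if robots_txt:
    sitemaps = extract_sitemap_urls_from_robots(robots_txt)
    urls = check_and_download_sitemap('https://example.com',
                                      sitemaps[0] if sitemaps else None)
```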
Design the crawler
Next, you design the web crawler. The crawler is designed to follow the best practices described in the Best practices for ethical web crawlers section of this guide. The EthicalCrawler class demonstrates several key principles of ethical crawling:
- Fetching and parsing the robots.txt file – The crawler fetches the robots.txt file of the target website.
- Respecting crawl permissions – Before crawling any URL, the crawler checks whether the rules in the robots.txt file allow that URL to be crawled. If a URL is disallowed, the crawler skips it and moves on to the next URL.
- Respecting the crawl delay – The crawler checks the robots.txt file for a crawl-delay directive. If a delay is specified, the crawler uses it between requests. Otherwise, it uses a default delay.
- User agent identification – The crawler uses a custom user agent string to identify itself to websites. If needed, website owners can set specific rules to restrict or allow your crawler.
- Error handling and graceful degradation – If the robots.txt file can't be fetched or parsed, the crawler proceeds with conservative default rules. It handles network errors and non-200 HTTP responses.
- Limited crawling – To avoid overwhelming the server, the number of pages that can be crawled is limited.
The following script is pseudocode that illustrates how the web crawler works:
```python
import requests
from urllib.parse import urljoin, urlparse
import time

class EthicalCrawler:
    def __init__(self, start_url, user_agent='EthicalBot/1.0'):
        self.start_url = start_url
        self.user_agent = user_agent
        self.domain = urlparse(start_url).netloc
        self.robots_parser = None
        self.crawl_delay = 1  # Default delay in seconds

    def can_fetch(self, url):
        if self.robots_parser:
            return self.robots_parser.allowed(url, self.user_agent)
        return True  # If no robots.txt, assume allowed but crawl conservatively

    def get_crawl_delay(self):
        if self.robots_parser:
            delay = self.robots_parser.agent(self.user_agent).delay
            if delay is not None:
                self.crawl_delay = delay
        print(f"Using crawl delay of {self.crawl_delay} seconds")

    def crawl(self, max_pages=10):
        self.get_crawl_delay()
        pages_crawled = 0
        urls_to_crawl = [self.start_url]

        while urls_to_crawl and pages_crawled < max_pages:
            url = urls_to_crawl.pop(0)

            if not self.can_fetch(url):
                print(f"robots.txt disallows crawling: {url}")
                continue

            try:
                response = requests.get(url, headers={'User-Agent': self.user_agent})
                if response.status_code == 200:
                    print(f"Successfully crawled: {url}")
                    # Here you would typically parse the content, extract links, etc.
                    # For this example, we'll just increment the counter
                    pages_crawled += 1
                else:
                    print(f"Failed to crawl {url}: HTTP {response.status_code}")
            except Exception as e:
                print(f"Error crawling {url}: {e}")

            # Respect the crawl delay
            time.sleep(self.crawl_delay)

        print(f"Crawling complete. Crawled {pages_crawled} pages.")
```
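The pseudocode leaves robots_parser abstract. One possible way to back it, assumed here for illustration rather than prescribed by this guide, is Python's standard library urllib.robotparser. Note that its method names (can_fetch, crawl_delay) differ from the allowed and agent(...).delay calls in the pseudocode, so the can_fetch and get_crawl_delay methods would need small adjustments if you use it.

```python
from urllib.parse import urljoin
from urllib.robotparser import RobotFileParser

def build_robots_parser(start_url):
    # Fetch and parse robots.txt with the standard-library parser
    robots_url = urljoin(start_url, '/robots.txt')
    parser = RobotFileParser()
    parser.set_url(robots_url)
    try:
        parser.read()  # Downloads and parses robots.txt
    except OSError as e:
        print(f"Could not read {robots_url}: {e}; falling back to conservative defaults")
        return None
    return parser

if __name__ == "__main__":
    user_agent = 'EthicalBot/1.0'
    parser = build_robots_parser('https://example.com')
    if parser:
        print("Allowed:", parser.can_fetch(user_agent, 'https://example.com/some-page'))
        print("Crawl delay:", parser.crawl_delay(user_agent) or 1)  # 1-second default
```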
Build an advanced, ethical web crawler that collects ESG data
- Copy the following code example of the advanced, ethical web crawler used in this system:
```python
import requests
from urllib.parse import urljoin, urlparse
import time
from collections import deque
import random
from bs4 import BeautifulSoup
import re
import csv
import os

class EnhancedESGCrawler:
    def __init__(self, start_url):
        self.start_url = start_url
        self.domain = urlparse(start_url).netloc
        self.desktop_user_agent = 'ESGEthicalBot/1.0'
        self.mobile_user_agent = 'Mozilla/5.0 (iPhone; CPU iPhone OS 14_0 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.0 Mobile/15E148 Safari/604.1'
        self.robots_parser = None
        self.crawl_delay = None
        self.urls_to_crawl = deque()
        self.crawled_urls = set()
        self.max_retries = 2
        self.session = requests.Session()
        self.esg_data = []
        self.news_links = []
        self.pdf_links = []

    def setup(self):
        self.fetch_robots_txt()  # Provided in previous snippet
        self.fetch_sitemap()     # Provided in previous snippet

    def can_fetch(self, url, user_agent):
        if self.robots_parser:
            return self.robots_parser.allowed(url, user_agent)
        return True

    def delay(self):
        if self.crawl_delay is not None:
            time.sleep(self.crawl_delay)
        else:
            time.sleep(random.uniform(1, 3))

    def get_headers(self, user_agent):
        return {
            'User-Agent': user_agent,
            'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
            'Accept-Language': 'en-US,en;q=0.5',
            'Accept-Encoding': 'gzip, deflate, br',
            'DNT': '1',
            'Connection': 'keep-alive',
            'Upgrade-Insecure-Requests': '1'
        }

    def extract_esg_data(self, url, html_content):
        soup = BeautifulSoup(html_content, 'html.parser')
        esg_data = {
            'url': url,
            'environmental': self.extract_environmental_data(soup),
            'social': self.extract_social_data(soup),
            'governance': self.extract_governance_data(soup)
        }
        self.esg_data.append(esg_data)

        # Extract news links and PDFs
        self.extract_news_links(soup, url)
        self.extract_pdf_links(soup, url)

    def extract_environmental_data(self, soup):
        keywords = ['carbon footprint', 'emissions', 'renewable energy', 'waste management', 'climate change']
        return self.extract_keyword_data(soup, keywords)

    def extract_social_data(self, soup):
        keywords = ['diversity', 'inclusion', 'human rights', 'labor practices', 'community engagement']
        return self.extract_keyword_data(soup, keywords)

    def extract_governance_data(self, soup):
        keywords = ['board structure', 'executive compensation', 'shareholder rights', 'ethics', 'transparency']
        return self.extract_keyword_data(soup, keywords)

    def extract_keyword_data(self, soup, keywords):
        text = soup.get_text().lower()
        return {keyword: len(re.findall(r'\b' + re.escape(keyword) + r'\b', text)) for keyword in keywords}

    def extract_news_links(self, soup, base_url):
        news_keywords = ['news', 'press release', 'article', 'blog', 'sustainability']
        for a in soup.find_all('a', href=True):
            if any(keyword in a.text.lower() for keyword in news_keywords):
                full_url = urljoin(base_url, a['href'])
                # Deduplicate on the URL (the list stores dictionaries)
                if all(link['url'] != full_url for link in self.news_links):
                    self.news_links.append({'url': full_url, 'text': a.text.strip()})

    def extract_pdf_links(self, soup, base_url):
        for a in soup.find_all('a', href=True):
            if a['href'].lower().endswith('.pdf'):
                full_url = urljoin(base_url, a['href'])
                # Deduplicate on the URL (the list stores dictionaries)
                if all(link['url'] != full_url for link in self.pdf_links):
                    self.pdf_links.append({'url': full_url, 'text': a.text.strip()})

    def is_relevant_to_sustainable_finance(self, text):
        keywords = ['sustainable finance', 'esg', 'green bond', 'social impact', 'environmental impact',
                    'climate risk', 'sustainability report', 'corporate responsibility']
        return any(keyword in text.lower() for keyword in keywords)

    def attempt_crawl(self, url, user_agent):
        for _ in range(self.max_retries):
            try:
                response = self.session.get(url, headers=self.get_headers(user_agent), timeout=10)
                if response.status_code == 200:
                    print(f"Successfully crawled: {url}")
                    if response.headers.get('Content-Type', '').startswith('text/html'):
                        self.extract_esg_data(url, response.text)
                    elif response.headers.get('Content-Type', '').startswith('application/pdf'):
                        self.save_pdf(url, response.content)
                    return True
                else:
                    print(f"Failed to crawl {url}: HTTP {response.status_code}")
            except requests.RequestException as e:
                print(f"Error crawling {url} with {user_agent}: {e}")
            self.delay()
        return False

    def crawl_url(self, url):
        if not self.can_fetch(url, self.desktop_user_agent):
            print(f"Robots.txt disallows desktop user agent: {url}")
            if self.can_fetch(url, self.mobile_user_agent):
                print(f"Attempting with mobile user agent: {url}")
                return self.attempt_crawl(url, self.mobile_user_agent)
            else:
                print(f"Robots.txt disallows both user agents: {url}")
                return False
        return self.attempt_crawl(url, self.desktop_user_agent)

    def crawl(self, max_pages=100):
        self.setup()
        if not self.urls_to_crawl:
            self.urls_to_crawl.append(self.start_url)

        pages_crawled = 0
        while self.urls_to_crawl and pages_crawled < max_pages:
            url = self.urls_to_crawl.popleft()
            if url not in self.crawled_urls:
                if self.crawl_url(url):
                    pages_crawled += 1
                self.crawled_urls.add(url)
                self.delay()

        print(f"Crawling complete. Successfully crawled {pages_crawled} pages.")
        self.save_esg_data()
        self.save_news_links()
        self.save_pdf_links()

    def save_esg_data(self):
        with open('esg_data.csv', 'w', newline='', encoding='utf-8') as file:
            writer = csv.DictWriter(file, fieldnames=['url', 'environmental', 'social', 'governance'])
            writer.writeheader()
            for data in self.esg_data:
                writer.writerow({
                    'url': data['url'],
                    'environmental': ', '.join([f"{k}: {v}" for k, v in data['environmental'].items()]),
                    'social': ', '.join([f"{k}: {v}" for k, v in data['social'].items()]),
                    'governance': ', '.join([f"{k}: {v}" for k, v in data['governance'].items()])
                })
        print("ESG data saved to esg_data.csv")

    def save_news_links(self):
        with open('news_links.csv', 'w', newline='', encoding='utf-8') as file:
            writer = csv.DictWriter(file, fieldnames=['url', 'text', 'relevant'])
            writer.writeheader()
            for news in self.news_links:
                writer.writerow({
                    'url': news['url'],
                    'text': news['text'],
                    'relevant': self.is_relevant_to_sustainable_finance(news['text'])
                })
        print("News links saved to news_links.csv")

    def save_pdf_links(self):
        # Code for saving the PDF links in Amazon S3 or the file system
        pass

    def save_pdf(self, url, content):
        # Code for saving the PDF in Amazon S3 or the file system
        pass

# Example usage
if __name__ == "__main__":
    start_url = input("Enter the starting URL to crawl for ESG data and news: ")
    crawler = EnhancedESGCrawler(start_url)
    crawler.crawl(max_pages=50)
```
- Set up the various attributes, including the user agents, the empty URL collections, and the data storage lists.
- Adjust the keywords and relevance criteria in the is_relevant_to_sustainable_finance() method to meet your specific needs.
- Make sure that the website's robots.txt file allows crawling, and that you use the crawl delay and user agent specified in the robots.txt file.
- Depending on your organization's needs, consider the following customizations to the provided web crawler script (a sketch of the first item follows this list):
  - Implement the fetch_sitemap() method for more efficient URL discovery.
  - Enhance error logging and handling for production use.
  - Implement more sophisticated content relevance analysis.
  - Add depth and breadth controls to limit the crawling scope.
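The following is a minimal sketch of how the fetch_sitemap() method could reuse the check_and_download_sitemap function from the sitemap section to seed the crawl queue. This is an assumption about how to wire the pieces together, not the definitive implementation; it assumes check_and_download_sitemap is importable in the same project.

```python
def fetch_sitemap(self):
    # Candidate fetch_sitemap() method for EnhancedESGCrawler: seed the crawl queue
    # from the sitemap, falling back to the start URL if no sitemap is found.
    sitemap_urls = check_and_download_sitemap(self.start_url, user_agent=self.desktop_user_agent)
    if sitemap_urls:
        # Only queue URLs on the same domain, and skip anything already crawled
        for url in sitemap_urls:
            if urlparse(url).netloc == self.domain and url not in self.crawled_urls:
                self.urls_to_crawl.append(url)
        print(f"Seeded {len(self.urls_to_crawl)} URLs from the sitemap")
    else:
        print("No sitemap found; starting from the start URL only")
```

A fetch_robots_txt() method could similarly wrap the robots.txt snippets shown earlier to populate self.robots_parser and self.crawl_delay.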