Building the web crawler
As described in the Architecture section, the application runs in batches, one batch for each company.
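The following is a minimal sketch of that batch pattern. It assumes a hypothetical companies.csv file with a domain column; the crawler that each batch invokes is built later in this section.

import csv

def run_batches(companies_file='companies.csv'):
    # Hypothetical input file: one row per company, with a "domain" column
    with open(companies_file, newline='', encoding='utf-8') as f:
        for row in csv.DictReader(f):
            domain = row['domain'].strip()
            print(f"Starting crawl batch for {domain}")
            # Invoke the web crawler for this company's domain here, for example:
            # EnhancedESGCrawler(f"https://{domain}").crawl(max_pages=50)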
Retrieving and processing the robots.txt file
After you prepare the dataset, you need to confirm whether the domain has a robots.txt file.
To retrieve and process the robots.txt file
- If you haven't already done so, install the requests library by running the following command in a terminal: pip install requests
- Run the following script. This script does the following:
  - It defines a check_robots_txt function that takes a domain as input.
  - It constructs the full URL for the robots.txt file.
  - It sends a GET request to the robots.txt file's URL.
  - If the request succeeds (status code 200), a robots.txt file exists.
  - If the request fails or returns a different status code, the robots.txt file doesn't exist or isn't accessible.
import requests
from urllib.parse import urljoin

def check_robots_txt(domain):
    # Ensure the domain starts with a protocol
    if not domain.startswith(('http://', 'https://')):
        domain = 'https://' + domain

    # Construct the full URL for robots.txt
    robots_url = urljoin(domain, '/robots.txt')

    try:
        # Send a GET request to the robots.txt URL
        response = requests.get(robots_url, timeout=5)

        # Check if the request was successful (status code 200)
        if response.status_code == 200:
            print(f"robots.txt found at {robots_url}")
            return True
        else:
            print(f"No robots.txt found at {robots_url} (Status code: {response.status_code})")
            return False
    except requests.RequestException as e:
        print(f"Error checking {robots_url}: {e}")
        return False
Note
This script handles exceptions for network errors and other issues.
- If a robots.txt file exists, use the following script to download it:
import requests

def download(url, headers=None):
    # Download the content at the given URL and return it as text
    response = requests.get(url, headers=headers, timeout=5)
    response.raise_for_status()  # Raise an exception for non-2xx responses
    return response.text

def download_robots_txt(url, headers=None):
    # Append '/robots.txt' to the URL to get the robots.txt file's URL
    robots_url = url.rstrip('/') + '/robots.txt'
    try:
        return download(robots_url, headers)
    except requests.exceptions.RequestException as e:
        print(f"Error downloading robots.txt: {e}, \nGenerating sitemap using combinations...")
        return None
Note
You can customize or modify these scripts based on your use case. You can also combine them.
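For example, the following is a minimal usage sketch (not part of the original scripts) that combines the two snippets: it checks whether a robots.txt file exists and downloads the file only if it does. The example.com domain is a placeholder.

# Combines check_robots_txt and download_robots_txt from the previous snippets
domain = "example.com"  # Hypothetical domain; replace with a domain from your dataset
if check_robots_txt(domain):
    robots_txt_content = download_robots_txt("https://" + domain)
    if robots_txt_content:
        print(robots_txt_content[:200])  # Preview the first 200 characters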
Retrieving and processing the sitemap
Next, you need to process the sitemap.
To retrieve and process the sitemap
- Run the following script. This script defines the check_and_download_sitemap function, which does the following:
  - Accepts a base URL, an optional sitemap URL from robots.txt, and a user agent string.
  - Checks several potential sitemap locations, including the location from robots.txt (if one is provided).
  - Attempts to download the sitemap from each location.
  - Verifies that the downloaded content is in XML format.
  - Calls the parse_sitemap function to extract the URLs. That function:
    - Parses the XML content of the sitemap.
    - Handles both regular sitemaps and sitemap index files.
    - Recursively retrieves sub-sitemaps if it encounters a sitemap index.
import requests
from urllib.parse import urljoin
import xml.etree.ElementTree as ET

def check_and_download_sitemap(base_url, robots_sitemap_url=None, user_agent='SitemapBot/1.0'):
    headers = {'User-Agent': user_agent}
    sitemap_locations = [
        robots_sitemap_url,
        urljoin(base_url, '/sitemap.xml'),
        urljoin(base_url, '/sitemap_index.xml'),
        urljoin(base_url, '/sitemap/'),
        urljoin(base_url, '/sitemap/sitemap.xml')
    ]

    for sitemap_url in sitemap_locations:
        if not sitemap_url:
            continue
        print(f"Checking for sitemap at: {sitemap_url}")
        try:
            response = requests.get(sitemap_url, headers=headers, timeout=10)
            if response.status_code == 200:
                content_type = response.headers.get('Content-Type', '')
                if 'xml' in content_type:
                    print(f"Successfully downloaded sitemap from {sitemap_url}")
                    return parse_sitemap(response.text)
                else:
                    print(f"Found content at {sitemap_url}, but it's not XML. Content-Type: {content_type}")
        except requests.RequestException as e:
            print(f"Error downloading sitemap from {sitemap_url}: {e}")

    print("No sitemap found.")
    return []

def parse_sitemap(sitemap_content):
    urls = []
    try:
        root = ET.fromstring(sitemap_content)

        # Handle both sitemap and sitemapindex
        for loc in root.findall('.//{http://www.sitemaps.org/schemas/sitemap/0.9}loc'):
            urls.append(loc.text)

        # If it's a sitemap index, recursively fetch each sitemap
        if root.tag.endswith('sitemapindex'):
            all_urls = []
            for url in urls:
                print(f"Fetching sub-sitemap: {url}")
                # Pass the sub-sitemap URL directly so that it is tried first
                sub_sitemap_urls = check_and_download_sitemap(url, robots_sitemap_url=url)
                all_urls.extend(sub_sitemap_urls)
            return all_urls
    except ET.ParseError as e:
        print(f"Error parsing sitemap XML: {e}")

    return urls

if __name__ == "__main__":
    base_url = input("Enter the base URL of the website: ")
    robots_sitemap_url = input("Enter the sitemap URL from robots.txt (or press Enter if none): ").strip() or None

    urls = check_and_download_sitemap(base_url, robots_sitemap_url)

    print(f"Found {len(urls)} URLs in sitemap:")
    for url in urls[:5]:  # Print first 5 URLs as an example
        print(url)
    if len(urls) > 5:
        print("...")
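Websites often advertise their sitemap location through Sitemap: directives in robots.txt. The following is a minimal sketch (an assumption, not part of the original scripts) that extracts those directives from a downloaded robots.txt file and passes the first one to check_and_download_sitemap.

def extract_sitemap_urls_from_robots(robots_txt_content):
    # Collect the URLs listed in "Sitemap:" directives, one directive per line
    sitemap_urls = []
    for line in robots_txt_content.splitlines():
        if line.strip().lower().startswith('sitemap:'):
            sitemap_urls.append(line.split(':', 1)[1].strip())
    return sitemap_urls

# Example usage (assumes robots_txt_content was downloaded by the earlier snippet):
# candidates = extract_sitemap_urls_from_robots(robots_txt_content)
# urls = check_and_download_sitemap("https://example.com",
#                                   candidates[0] if candidates else None)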
Design the crawler
Next, you design the web crawler. The crawler is designed to follow the best practices described in Best practices for ethical web crawlers in this guide. The EthicalCrawler class demonstrates several key principles of ethical crawling:
- Fetching and parsing the robots.txt file – The crawler fetches the robots.txt file of the target website.
- Respecting crawl permissions – Before crawling any URL, the crawler checks whether the rules in the robots.txt file allow that URL to be crawled. If a URL isn't allowed, the crawler skips it and moves on to the next URL.
- Respecting the crawl delay – The crawler checks the robots.txt file for a crawl-delay directive. If one is specified, the crawler uses that delay between requests. Otherwise, it uses a default delay.
- User agent identification – The crawler uses a custom user agent string to identify itself to websites. If needed, website owners can set specific rules to restrict or allow your crawler.
- Error handling and graceful degradation – If the robots.txt file can't be fetched or parsed, the crawler proceeds with conservative default rules. It handles network errors and non-200 HTTP responses.
- Limited crawling – To avoid overloading the server, there is a limit on the number of pages that can be crawled.
The following script is pseudocode that illustrates how the web crawler works:
import requests
from urllib.parse import urljoin, urlparse
import time

class EthicalCrawler:
    def __init__(self, start_url, user_agent='EthicalBot/1.0'):
        self.start_url = start_url
        self.user_agent = user_agent
        self.domain = urlparse(start_url).netloc
        self.robots_parser = None
        self.crawl_delay = 1  # Default delay in seconds

    def can_fetch(self, url):
        if self.robots_parser:
            return self.robots_parser.allowed(url, self.user_agent)
        return True  # If no robots.txt, assume allowed but crawl conservatively

    def get_crawl_delay(self):
        if self.robots_parser:
            delay = self.robots_parser.agent(self.user_agent).delay
            if delay is not None:
                self.crawl_delay = delay
        print(f"Using crawl delay of {self.crawl_delay} seconds")

    def crawl(self, max_pages=10):
        self.get_crawl_delay()
        pages_crawled = 0
        urls_to_crawl = [self.start_url]
        while urls_to_crawl and pages_crawled < max_pages:
            url = urls_to_crawl.pop(0)
            if not self.can_fetch(url):
                print(f"robots.txt disallows crawling: {url}")
                continue
            try:
                response = requests.get(url, headers={'User-Agent': self.user_agent})
                if response.status_code == 200:
                    print(f"Successfully crawled: {url}")
                    # Here you would typically parse the content, extract links, etc.
                    # For this example, we'll just increment the counter
                    pages_crawled += 1
                else:
                    print(f"Failed to crawl {url}: HTTP {response.status_code}")
            except Exception as e:
                print(f"Error crawling {url}: {e}")
            # Respect the crawl delay
            time.sleep(self.crawl_delay)
        print(f"Crawling complete. Crawled {pages_crawled} pages.")
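The pseudocode never populates self.robots_parser. The following sketch shows one way to do that by using Python's standard urllib.robotparser module; this is an assumption rather than the guide's required implementation, and RobotFileParser exposes can_fetch() and crawl_delay() instead of the attribute-style calls shown in the pseudocode.

# A sketch (an assumption, not the guide's required implementation) that fills in
# the robots.txt handling by using Python's standard urllib.robotparser module.
from urllib.robotparser import RobotFileParser
from urllib.parse import urljoin

def load_robots_parser(start_url):
    parser = RobotFileParser()
    parser.set_url(urljoin(start_url, '/robots.txt'))
    parser.read()  # Fetches and parses robots.txt
    return parser

# Usage inside the crawler (adapting the pseudocode's method calls):
#   self.robots_parser = load_robots_parser(self.start_url)
#   allowed = self.robots_parser.can_fetch(self.user_agent, url)
#   delay = self.robots_parser.crawl_delay(self.user_agent)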
Building an advanced, ethical web crawler to collect ESG data
- For the advanced, ethical web crawler that is used in this system, copy the following code example:
import requests
from urllib.parse import urljoin, urlparse
import time
from collections import deque
import random
from bs4 import BeautifulSoup
import re
import csv
import os

class EnhancedESGCrawler:
    def __init__(self, start_url):
        self.start_url = start_url
        self.domain = urlparse(start_url).netloc
        self.desktop_user_agent = 'ESGEthicalBot/1.0'
        self.mobile_user_agent = 'Mozilla/5.0 (iPhone; CPU iPhone OS 14_0 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.0 Mobile/15E148 Safari/604.1'
        self.robots_parser = None
        self.crawl_delay = None
        self.urls_to_crawl = deque()
        self.crawled_urls = set()
        self.max_retries = 2
        self.session = requests.Session()
        self.esg_data = []
        self.news_links = []
        self.pdf_links = []

    def setup(self):
        self.fetch_robots_txt()  # Provided in a previous snippet
        self.fetch_sitemap()     # Provided in a previous snippet

    def can_fetch(self, url, user_agent):
        if self.robots_parser:
            return self.robots_parser.allowed(url, user_agent)
        return True

    def delay(self):
        if self.crawl_delay is not None:
            time.sleep(self.crawl_delay)
        else:
            time.sleep(random.uniform(1, 3))

    def get_headers(self, user_agent):
        return {
            'User-Agent': user_agent,
            'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
            'Accept-Language': 'en-US,en;q=0.5',
            'Accept-Encoding': 'gzip, deflate, br',
            'DNT': '1',
            'Connection': 'keep-alive',
            'Upgrade-Insecure-Requests': '1'
        }

    def extract_esg_data(self, url, html_content):
        soup = BeautifulSoup(html_content, 'html.parser')
        esg_data = {
            'url': url,
            'environmental': self.extract_environmental_data(soup),
            'social': self.extract_social_data(soup),
            'governance': self.extract_governance_data(soup)
        }
        self.esg_data.append(esg_data)
        # Extract news links and PDFs
        self.extract_news_links(soup, url)
        self.extract_pdf_links(soup, url)

    def extract_environmental_data(self, soup):
        keywords = ['carbon footprint', 'emissions', 'renewable energy', 'waste management', 'climate change']
        return self.extract_keyword_data(soup, keywords)

    def extract_social_data(self, soup):
        keywords = ['diversity', 'inclusion', 'human rights', 'labor practices', 'community engagement']
        return self.extract_keyword_data(soup, keywords)

    def extract_governance_data(self, soup):
        keywords = ['board structure', 'executive compensation', 'shareholder rights', 'ethics', 'transparency']
        return self.extract_keyword_data(soup, keywords)

    def extract_keyword_data(self, soup, keywords):
        text = soup.get_text().lower()
        return {keyword: len(re.findall(r'\b' + re.escape(keyword) + r'\b', text)) for keyword in keywords}

    def extract_news_links(self, soup, base_url):
        news_keywords = ['news', 'press release', 'article', 'blog', 'sustainability']
        for a in soup.find_all('a', href=True):
            if any(keyword in a.text.lower() for keyword in news_keywords):
                full_url = urljoin(base_url, a['href'])
                if full_url not in self.news_links:
                    self.news_links.append({'url': full_url, 'text': a.text.strip()})

    def extract_pdf_links(self, soup, base_url):
        for a in soup.find_all('a', href=True):
            if a['href'].lower().endswith('.pdf'):
                full_url = urljoin(base_url, a['href'])
                if full_url not in self.pdf_links:
                    self.pdf_links.append({'url': full_url, 'text': a.text.strip()})

    def is_relevant_to_sustainable_finance(self, text):
        keywords = ['sustainable finance', 'esg', 'green bond', 'social impact', 'environmental impact',
                    'climate risk', 'sustainability report', 'corporate responsibility']
        return any(keyword in text.lower() for keyword in keywords)

    def attempt_crawl(self, url, user_agent):
        for _ in range(self.max_retries):
            try:
                response = self.session.get(url, headers=self.get_headers(user_agent), timeout=10)
                if response.status_code == 200:
                    print(f"Successfully crawled: {url}")
                    if response.headers.get('Content-Type', '').startswith('text/html'):
                        self.extract_esg_data(url, response.text)
                    elif response.headers.get('Content-Type', '').startswith('application/pdf'):
                        self.save_pdf(url, response.content)
                    return True
                else:
                    print(f"Failed to crawl {url}: HTTP {response.status_code}")
            except requests.RequestException as e:
                print(f"Error crawling {url} with {user_agent}: {e}")
            self.delay()
        return False

    def crawl_url(self, url):
        if not self.can_fetch(url, self.desktop_user_agent):
            print(f"Robots.txt disallows desktop user agent: {url}")
            if self.can_fetch(url, self.mobile_user_agent):
                print(f"Attempting with mobile user agent: {url}")
                return self.attempt_crawl(url, self.mobile_user_agent)
            else:
                print(f"Robots.txt disallows both user agents: {url}")
                return False
        return self.attempt_crawl(url, self.desktop_user_agent)

    def crawl(self, max_pages=100):
        self.setup()
        if not self.urls_to_crawl:
            self.urls_to_crawl.append(self.start_url)
        pages_crawled = 0
        while self.urls_to_crawl and pages_crawled < max_pages:
            url = self.urls_to_crawl.popleft()
            if url not in self.crawled_urls:
                if self.crawl_url(url):
                    pages_crawled += 1
                self.crawled_urls.add(url)
                self.delay()
        print(f"Crawling complete. Successfully crawled {pages_crawled} pages.")
        self.save_esg_data()
        self.save_news_links()
        self.save_pdf_links()

    def save_esg_data(self):
        with open('esg_data.csv', 'w', newline='', encoding='utf-8') as file:
            writer = csv.DictWriter(file, fieldnames=['url', 'environmental', 'social', 'governance'])
            writer.writeheader()
            for data in self.esg_data:
                writer.writerow({
                    'url': data['url'],
                    'environmental': ', '.join([f"{k}: {v}" for k, v in data['environmental'].items()]),
                    'social': ', '.join([f"{k}: {v}" for k, v in data['social'].items()]),
                    'governance': ', '.join([f"{k}: {v}" for k, v in data['governance'].items()])
                })
        print("ESG data saved to esg_data.csv")

    def save_news_links(self):
        with open('news_links.csv', 'w', newline='', encoding='utf-8') as file:
            writer = csv.DictWriter(file, fieldnames=['url', 'text', 'relevant'])
            writer.writeheader()
            for news in self.news_links:
                writer.writerow({
                    'url': news['url'],
                    'text': news['text'],
                    'relevant': self.is_relevant_to_sustainable_finance(news['text'])
                })
        print("News links saved to news_links.csv")

    def save_pdf_links(self):
        # Code for saving PDF links to Amazon S3 or the file system
        pass

    def save_pdf(self, url, content):
        # Code for saving the PDF to Amazon S3 or the file system
        pass

# Example usage
if __name__ == "__main__":
    start_url = input("Enter the starting URL to crawl for ESG data and news: ")
    crawler = EnhancedESGCrawler(start_url)
    crawler.crawl(max_pages=50)
- Set up the various attributes, including the user agents, the empty set of crawled URLs, and the lists for data storage.
- Adjust the keywords and relevance criteria in the is_relevant_to_sustainable_finance() method to meet your specific requirements.
- Make sure that the robots.txt file allows web crawling of the website, and that you use the crawl delay and user agent that are specified in the robots.txt file.
- Based on your organization's needs, consider the following customizations to the provided web crawler script:
  - Implement the fetch_sitemap() method to discover URLs more efficiently (a sketch follows this list).
  - Enhance error logging and handling for production use.
  - Implement more sophisticated content relevance analysis.
  - Add depth and breadth controls to limit the scope of the crawl.
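For example, the following is a minimal sketch of one possible fetch_sitemap() implementation for the EnhancedESGCrawler class. It assumes that the check_and_download_sitemap function from earlier in this section is available in the same module and simply uses it to seed the crawl queue.

def fetch_sitemap(self):
    # Seed the crawl queue with URLs discovered through the site's sitemap.
    # check_and_download_sitemap is assumed to be defined in the same module.
    sitemap_urls = check_and_download_sitemap(self.start_url, user_agent=self.desktop_user_agent)
    for url in sitemap_urls:
        if url not in self.crawled_urls:
            self.urls_to_crawl.append(url)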