Extract structured data from web pages using natural language prompts via Heurist's FirecrawlSearchDigestAgent. Supports single URLs, multiple pages, and wildcard domain crawling to gather and digest web content based on your specific extraction requirements.

Input: Accepts a list of URLs (with optional wildcard patterns for domain crawling) and a natural language description of what data to extract from those pages.

Output: Returns extracted structured data from the specified URLs based on your extraction prompt, or an error message if URLs are invalid or extraction fails.

Use cases:
- Extract product information from multiple e-commerce pages
- Extract all pricing data from a competitor's website using wildcards
- Get structured data from news articles across a domain
- Scrape and digest content from multiple URLs based on custom extraction rules
This action will execute the specified operation when your workflow runs.
This integration requires a connector to be configured before it can be used in workflows.
Debug mode flag. ALWAYS use false. Default: false.
Natural language description of what data to extract from the pages.
List of URLs to extract data from. Can include wildcards (e.g., 'example.com/*') to crawl entire domains.
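For illustration, the inputs above might be configured like this. The field keys (`urls`, `extraction_prompt`, `debug`) are assumptions inferred from the parameter descriptions, not confirmed names:

```json
{
  "urls": ["https://example.com/products/*", "https://example.com/pricing"],
  "extraction_prompt": "Extract each product's name, price, and availability as a list of objects",
  "debug": false
}
```

The wildcard entry asks the agent to crawl every page under `example.com/products/`, while the second entry targets a single page.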
Payment details from the x402 facilitator.

Fields:
- amountPaid — Amount paid in smallest token unit
- asset — Token address used for payment
- network — Network where the payment was made (e.g., 'base')
- payer — Payer wallet address
- transaction — Transaction hash (may be null if the server doesn't return an x-payment-response header)

The endpoint response data. Access fields with {{nodeId.result.resource.fieldName}}.
Whether the request succeeded
HTTP status code
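A sketch of what a successful response might look like, combining the outputs above. Only the payment field names and the {{nodeId.result.resource.fieldName}} access pattern come from this documentation; the nesting, the top-level key names (`success`, `statusCode`, `payment`), and all values are illustrative assumptions:

```json
{
  "success": true,
  "statusCode": 200,
  "resource": {
    "products": [
      { "name": "Example Widget", "price": "19.99", "availability": "in stock" }
    ]
  },
  "payment": {
    "amountPaid": "10000",
    "asset": "0x0000000000000000000000000000000000000000",
    "network": "base",
    "payer": "0x0000000000000000000000000000000000000000",
    "transaction": null
  }
}
```

With a node ID of, say, `extract1`, the first product's name would be referenced as {{extract1.result.resource.products}} or a deeper path, depending on how the platform resolves nested fields.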