Stream real-time responses from NanoChat, a 32-layer transformer language model built by Andrej Karpathy in 2025. Access this lightweight LLM via the NanoBrain platform for conversational AI tasks with streaming token-by-token output.

Input: Accepts an empty request body (no parameters required) to initiate a chat session.

Output: Returns streaming server-sent events (SSE) with token-by-token text generation from the NanoChat d32 model, including model identification and completion markers.

Use cases:
- Chat with a lightweight language model for quick responses
- Get streaming AI responses for conversational tasks
- Integrate a compact LLM into an application
- Generate text responses using the NanoChat d32 model
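Because the endpoint returns server-sent events, a client has to split the stream on blank lines and strip the `data:` prefix from each event to recover the generated tokens. A minimal Python sketch of that parsing step follows; the exact payload format NanoBrain emits (plain text tokens vs. JSON, and the completion marker) is an assumption here, so treat this as illustrative rather than the platform's official client:

```python
def parse_sse_events(raw: str) -> list[str]:
    """Split a raw SSE stream body into its data payloads.

    Events are separated by blank lines; each data line starts with
    "data:" followed by at most one optional space (per the SSE spec,
    only that single leading space is stripped, preserving token spacing).
    """
    events = []
    for block in raw.strip().split("\n\n"):
        data_lines = [
            line[len("data:"):].removeprefix(" ")
            for line in block.split("\n")
            if line.startswith("data:")
        ]
        if data_lines:
            events.append("\n".join(data_lines))
    return events

# Hypothetical stream body: two tokens followed by a completion marker.
sample = "data: Hello\n\ndata:  world\n\ndata: [DONE]\n\n"
print(parse_sse_events(sample))
# ['Hello', ' world', '[DONE]']
```

Note that `"data:  world"` (two spaces) yields `" world"`: stripping exactly one space keeps the inter-token whitespace the model generated.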
This action runs the operation above when your workflow executes. A connector must be configured before this integration can be used in workflows.
This endpoint has no configurable parameters.
Payment details from x402 facilitator
Fields:
- amountPaid: Amount paid in the smallest token unit
- asset: Token address used for payment
- network: Network where payment was made (e.g., 'base')
- payer: Payer wallet address
- transaction: Transaction hash (may be null if the server doesn't return the x-payment-response header)

The endpoint response data. Access fields with {{nodeId.result.resource.fieldName}}.
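Since the transaction field can be null when the x-payment-response header is missing, downstream steps should handle that case explicitly rather than assume a hash is present. A hedged Python sketch of consuming a payment record shaped like the fields above (the summarize_payment helper and all field values are illustrative, not part of the API):

```python
def summarize_payment(payment: dict) -> str:
    """Build a one-line summary of an x402 payment record.

    The transaction hash may be None when the server omits the
    x-payment-response header, so fall back to a placeholder.
    """
    tx = payment.get("transaction") or "unknown (no x-payment-response header)"
    return (f"{payment['amountPaid']} units of {payment['asset']} "
            f"on {payment['network']} from {payment['payer']}, tx: {tx}")

# Hypothetical payload mirroring the documented fields.
payment = {
    "amountPaid": "1000",
    "asset": "0xTokenAddress",   # placeholder, not a real address
    "network": "base",
    "payer": "0xPayerAddress",   # placeholder
    "transaction": None,         # server omitted x-payment-response
}
print(summarize_payment(payment))
# 1000 units of 0xTokenAddress on base from 0xPayerAddress, tx: unknown (no x-payment-response header)
```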
Additional response fields:
- Whether the request succeeded
- HTTP status code