Access Venice Uncensored 1.1, an unrestricted AI chat model, via PadelMaps. It handles any topic without content filtering and supports multiple models (Venice Uncensored, Llama 3.3 70B) with configurable temperature and token limits for customized responses.

Input: a chat-message array with conversation history, an optional model selection (Venice Uncensored or Llama 3.3 70B), a temperature setting for response creativity (0-2), and a maximum token limit.

Output: the AI's response text, the model name used, token usage statistics (prompt/completion/total tokens), and a success confirmation with timestamp.

Use cases:
- Chat with an AI that doesn't have content restrictions
- Get detailed responses on controversial or sensitive topics
- Use an unrestricted language model for research or creative writing
- Switch between AI models such as Llama 3.3 for specific tasks
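As a minimal sketch of the input and output shapes described above, the request payload and response might look like the following. All field names (`messages`, `temperature`, `max_tokens`, `text`, `usage`, and so on) are assumptions for illustration and should be checked against the connector's actual schema; the response here is mocked rather than fetched.

```python
# Hypothetical request payload for the chat action (field names assumed).
payload = {
    "model": "venice-uncensored",  # or "llama-3.3-70b"
    "messages": [
        {"role": "user", "content": "Summarize the x402 payment flow."},
    ],
    "temperature": 0.7,   # 0-2; higher values produce more creative output
    "max_tokens": 2048,   # cap on the length of the response
}

# Mocked response matching the Output description above (illustrative values).
response = {
    "text": "x402 is an HTTP-based payment protocol ...",
    "model": "venice-uncensored",
    "usage": {"prompt_tokens": 12, "completion_tokens": 58, "total_tokens": 70},
    "success": True,
}

# Token accounting: total should equal prompt + completion.
assert response["usage"]["total_tokens"] == (
    response["usage"]["prompt_tokens"] + response["usage"]["completion_tokens"]
)
```

In a workflow, only the payload would be authored by hand; the response object is what the node exposes to downstream steps.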
This integration requires a connector to be configured before it can be used in workflows.
Maximum tokens in response (default: 2048)
Chat messages array [{role: 'user'|'assistant', content: 'message'}]
Model ID (default: venice-uncensored). Available: venice-uncensored, llama-3.3-70b, etc.
Response creativity 0-2 (default: 0.7)
Payment details from x402 facilitator
Fields
- amountPaid: Amount paid in smallest token unit
- asset: Token address used for payment
- network: Network where payment was made (e.g., 'base')
- payer: Payer wallet address
- transaction: Transaction hash (may be null if the server doesn't return the x-payment-response header)

The endpoint response data. Access fields with {{nodeId.result.resource.fieldName}}
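The payment-details fields above can be pictured as an object like the one below. The values are purely illustrative (placeholder addresses, not real ones); in a workflow template the same fields would be referenced as, e.g., {{nodeId.result.resource.amountPaid}}.

```python
# Hypothetical x402 payment-details object; field names are taken from the
# list above, values are placeholders for illustration only.
payment = {
    "amountPaid": "1000000",        # smallest token unit (e.g. 6-decimal tokens)
    "asset": "0xTokenAddress",      # placeholder token contract address
    "network": "base",              # network where payment was made
    "payer": "0xPayerAddress",      # placeholder payer wallet address
    "transaction": None,            # may be None if the server omits the
                                    # x-payment-response header
}

# Because "transaction" may be null, downstream steps should handle that case.
tx = payment["transaction"] or "pending"
```

The null-check on `transaction` matters: a missing x-payment-response header does not mean the payment failed, only that the hash was not echoed back.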
Response Fields
- data
- success: Whether the request succeeded
- HTTP status code
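Putting the response fields together, a downstream step might consume an envelope shaped roughly as follows. The key names (`success`, `status`, `data`) are assumptions inferred from the Response Fields list, not a confirmed schema.

```python
# Hypothetical response envelope; shape inferred from the Response Fields
# list above, names are assumptions.
envelope = {
    "success": True,   # whether the request succeeded
    "status": 200,     # HTTP status code
    "data": {          # the endpoint response data
        "text": "...",
        "model": "venice-uncensored",
    },
}

# Typical guard before using the payload in a later workflow step.
if envelope["success"] and envelope["status"] == 200:
    reply = envelope["data"]["text"]
```

Checking both the success flag and the status code guards against partial failures where one is set but not the other.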