Access Venice Uncensored 1.1, an unrestricted AI chat model, via PadelMaps; it handles any topic without content filtering. Supports multiple models (Venice Uncensored, Llama 3.3 70B) with configurable temperature and token limits for customized responses. | Input: Accepts a chat message array with conversation history, an optional model selection (Venice Uncensored or Llama 3.3 70B), a temperature setting for response creativity (0-2), and a maximum token limit. | Output: Returns the AI's response text, the model name used, token usage statistics (prompt/completion/total tokens), and a success confirmation with timestamp. | Use cases: I want to chat with an AI that doesn't have content restrictions; Get detailed responses on controversial or sensitive topics; I need an unrestricted language model for research or creative writing; Switch between different AI models like Llama 3.3 for specific tasks | Cost: 0.05 USDC on Base
This action runs the configured chat request each time your workflow executes.
This integration requires a connector to be configured before it can be used in workflows.
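The input and output fields described above can be sketched as a request builder and response parser. This is a minimal illustration only: the field names (`messages`, `model`, `temperature`, `max_tokens`), the model identifier strings, and the response layout are assumptions inferred from the listing, not a confirmed PadelMaps or Venice API schema.

```python
def build_request(messages, model="venice-uncensored",
                  temperature=0.7, max_tokens=512):
    """Assemble a chat request payload from the fields the listing names.

    All key names and the model id strings are hypothetical placeholders
    based on the listing text, not a documented API contract.
    """
    if not 0 <= temperature <= 2:
        # The listing specifies a 0-2 range for temperature.
        raise ValueError("temperature must be between 0 and 2")
    return {
        "messages": messages,        # chat message array with conversation history
        "model": model,              # e.g. "venice-uncensored" or "llama-3.3-70b" (assumed ids)
        "temperature": temperature,  # 0-2; higher values give more creative responses
        "max_tokens": max_tokens,    # maximum token limit for the reply
    }


def parse_response(resp):
    """Pull out the output fields the listing says are returned."""
    return {
        "text": resp["text"],            # the AI's response text
        "model": resp["model"],          # model name used
        "usage": resp["usage"],          # prompt/completion/total token counts
        "success": resp["success"],      # success confirmation
        "timestamp": resp["timestamp"],  # timestamp of the confirmation
    }
```

A typical call would pass the running conversation, e.g. `build_request([{"role": "user", "content": "Hello"}], model="llama-3.3-70b", temperature=1.2)`, and hand the result to the connector; the parsed response then exposes the reply text and token usage for downstream workflow steps.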