```bash
curl -X POST \
  --header "Content-Type: application/json" \
  --header "Accept: application/json" \
  --header "Authorization: Bearer YOUR_API_KEY" \
  --data '{"name":"Test Profile","default_timeout_secs":300,"messaging_enabled":true}' \
  https://api.telnyx.com/v2/verify_profiles
```
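If you are working from Python rather than curl, the same request can be sketched with the requests library. This mirrors the curl call above (YOUR_API_KEY remains a placeholder) and is not an official SDK example:

```python
import requests

# Create a Verify profile; mirrors the curl request above.
response = requests.post(
    "https://api.telnyx.com/v2/verify_profiles",
    headers={
        "Accept": "application/json",
        "Authorization": "Bearer YOUR_API_KEY",
    },
    json={
        "name": "Test Profile",
        "default_timeout_secs": 300,
        "messaging_enabled": True,
    },
)
print(response.status_code, response.json())
```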
```js
messagingProfile.lastResponse.requestId // see: https://telnyx.com/docs/api/node#request_ids
messagingProfile.lastResponse.statusCode
```

`request` and `response` events

The Telnyx object emits `request` and `response` events. You can use them like this:

```js
const telnyx = require('telnyx')('KEY...');

const onRequest = (request) => {
  // Do something with the request object.
};

// Add the event handler function:
telnyx.on('request', onRequest);
```
```bash
$ npm test -- test/Error.test.ts -t 'Populates with type'
```

If you wish, you may run tests using your Telnyx Test API key by setting the environment variable TELNYX_TEST_API_KEY before running the tests:

```bash
$ export TELNYX_TEST_API_KEY='KEY...'
$ export TELNYX_MOCK_PORT='12...'
$ npm test
```

Debugging T...
```
@@ -42,7 +42,14 @@ public interface ICallsApi
    [Post("/v2/calls/{callControlId}/actions/speak")]
    Task SpeakTextAsync(string callControlId, [Body] SpeakTextRequest request, CancellationToken cancellationToken = default);

    [Get("/v2/calls/{callControlId}")]
    Task<TelnyxResponse<CallStatus>> ...
```
Install the package with:

```bash
npm install @telnyx/node-red-telnyx
```

Usage

telnyx-sms

The package needs to be configured with your account's API key and some other details, which you can find in the Telnyx Mission Control Portal. Both ways of sending SMS are supported: ...
```python
    llm_api: str,
    model: str,
    test_timeout_s: int,
    max_num_completed_requests: int,
    num_concurrent_requests: int,
    additional_sampling_params: str,
    results_dir: str,
    user_metadata: Dict[str, str],
):
    """
    Args:
        llm_api: The type of request to make. Either "chat" or "litellm".
        mod...
```
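Assuming these parameters belong to the benchmark's entry point, a call might look like the following. The function name and all argument values here are illustrative assumptions, not taken from the source:

```python
# Hypothetical invocation of the load-test entry point described above.
# The name run_token_benchmark and these values are illustrative assumptions.
run_token_benchmark(
    llm_api="chat",
    model="meta-llama/Llama-2-7b-chat-hf",
    test_timeout_s=600,
    max_num_completed_requests=20,
    num_concurrent_requests=2,
    additional_sampling_params="{}",
    results_dir="result_outputs",
    user_metadata={"run": "smoke-test"},
)
```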
This application is a simple chat interface that integrates with the Telnyx API to enable users to have conversations with an AI model. It is built using Python's Tkinter library for the GUI and uses threading to handle API requests without freezing the UI.
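The threading detail is the interesting part. Below is a minimal sketch of that pattern, with hypothetical helper names rather than the app's actual code: a worker thread performs the slow network call, and the Tk main loop polls a queue for results, since Tk widgets must only be touched from the main thread.

```python
import queue
import threading
import tkinter as tk

# Hypothetical helpers and names; the real app's handlers may differ.
results: "queue.Queue[str]" = queue.Queue()

def call_telnyx_api(prompt: str) -> str:
    return f"(model reply to: {prompt})"  # stand-in for the streaming HTTP call

def send_message(prompt: str) -> None:
    # Do the slow network call on a daemon thread so the Tk main loop never blocks.
    threading.Thread(
        target=lambda: results.put(call_telnyx_api(prompt)), daemon=True
    ).start()

def poll_results(app: tk.Tk) -> None:
    # Tk widgets are not thread-safe, so the main thread polls the queue
    # instead of letting the worker touch widgets directly.
    try:
        while True:
            reply = results.get_nowait()
            print(reply)  # stand-in for inserting into the chat text widget
    except queue.Empty:
        pass
    app.after(100, poll_results, app)

if __name__ == "__main__":
    app = tk.Tk()
    send_message("Hello")
    poll_results(app)
    app.mainloop()
```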
```python
import tkinter as tk

API_URL = "https://api.telnyx.com/v2/ai/generate_stream"

# Define the main application class
class ChatApplication(tk.Tk):
    def __init__(self):
        super().__init__()
        self.title("AI Chat Interface")  # Set the window title
        self.geometry("800x600")  # Set the window size

        # Initialize...
```
The load test spawns a number of concurrent requests to the LLM API and measures the inter-token latency and generation throughput per request and across concurrent requests. The prompt that is sent with each request is of the format:
Provide the API base and key in the .env file. Check out env_sample.txt.
2. Test out the Anyscale Endpoint with the following command by sending 20 requests:

@@ -10,4 +60,5 @@

4. Control sleep between rounds to avoid hitting the rate limit:
   `python llmval.py -r 20 -f fireworks -m "accounts/fireworks/...
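To make the measurement in the load-test description above concrete, here is a hedged sketch of how inter-token latency and throughput can be computed for one request from a stream of tokens. This is illustrative only, not the repo's actual implementation:

```python
import time
from typing import Iterable, List

def measure_stream(tokens: Iterable[str]) -> dict:
    """Time a stream of generated tokens for one request and report
    inter-token latency and generation throughput (illustrative only)."""
    start = time.perf_counter()
    arrivals: List[float] = []
    for _ in tokens:
        arrivals.append(time.perf_counter())
    if not arrivals:
        return {"num_tokens": 0, "mean_inter_token_latency_s": 0.0, "tokens_per_s": 0.0}
    total_s = arrivals[-1] - start
    # Inter-token latency: mean gap between consecutive token arrivals.
    gaps = [b - a for a, b in zip(arrivals, arrivals[1:])]
    return {
        "num_tokens": len(arrivals),
        "mean_inter_token_latency_s": sum(gaps) / len(gaps) if gaps else 0.0,
        "tokens_per_s": len(arrivals) / total_s if total_s > 0 else 0.0,
    }

# Usage with any token iterator, e.g. a streaming HTTP response body:
print(measure_stream(iter(["Hello", ",", " world"])))
```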