- Ensure compatibility across multiple frameworks, including .NET 6.0, .NET Framework 4.6.2, and .NET Standard 2.0 and above.
- Reduce dependencies to prevent version conflicts and the need for binding redirects.

Transcribing Audio Files

One of the primary functions of the SDK is audio transcription. Developers can transcribe audio files asynchronously or in real time. Below is an example of how to transcribe an audio file:

```csharp
using AssemblyAI;
using AssemblyAI.Transcripts;

var client = new AssemblyAIClient("YOUR_API_KEY");

var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3"
});

transcript.EnsureStatusCompleted();
Console.WriteLine(transcript.Text);
```

For local files, similar code can be used to achieve transcription:

```csharp
await using var stream = new FileStream("./nbc.mp3", FileMode.Open);

var transcript = await client.Transcripts.TranscribeAsync(
    stream,
    new TranscriptOptionalParams
    {
        LanguageCode = TranscriptLanguageCode.EnUs
    }
);

transcript.EnsureStatusCompleted();
Console.WriteLine(transcript.Text);
```

Real-Time Audio Transcription

The SDK also supports real-time audio transcription using Streaming Speech-to-Text. This feature is particularly valuable for applications requiring immediate processing of audio data.

```csharp
using AssemblyAI.Realtime;

await using var transcriber = new RealtimeTranscriber(new RealtimeTranscriberOptions
{
    ApiKey = "YOUR_API_KEY",
    SampleRate = 16_000
});

transcriber.PartialTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Partial: {transcript.Text}")
);
transcriber.FinalTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Final: {transcript.Text}")
);

await transcriber.ConnectAsync();

// Pseudocode for getting audio from a microphone, for example
GetAudio(async (chunk) => await transcriber.SendAudioAsync(chunk));

await transcriber.CloseAsync();
```

Using LeMUR for LLM Apps

The SDK integrates with LeMUR to enable developers to build large language model (LLM) apps on voice data. Here is an example:

```csharp
var lemurTaskParams = new LemurTaskParams
{
    Prompt = "Provide a brief summary of the transcript.",
    TranscriptIds = [transcript.Id],
    FinalModel = LemurModel.AnthropicClaude3_5_Sonnet
};

var response = await client.Lemur.TaskAsync(lemurTaskParams);

Console.WriteLine(response.Response);
```

Audio Intelligence Models

Additionally, the SDK includes built-in support for audio intelligence models, enabling sentiment analysis and other advanced features.

```csharp
var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3",
    SentimentAnalysis = true
});

foreach (var result in transcript.SentimentAnalysisResults!)
{
    Console.WriteLine(result.Text);
    Console.WriteLine(result.Sentiment); // POSITIVE, NEUTRAL, or NEGATIVE
    Console.WriteLine(result.Confidence);
    Console.WriteLine($"Timestamp: {result.Start} - {result.End}");
}
```

For more information, visit the official AssemblyAI blog.

Image source: Shutterstock
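The microphone capture in the real-time example is left as pseudocode (`GetAudio`). As one possible way to fill it in, here is a sketch using the third-party NAudio package; `WaveInEvent`, `WaveFormat`, and the `DataAvailable` event are NAudio APIs, while `transcriber` is the connected `RealtimeTranscriber` from the example above. That `SendAudioAsync` accepts a raw PCM byte slice is an assumption here, so treat this as a starting point rather than a definitive implementation.

```csharp
// Sketch: feeding microphone audio to the RealtimeTranscriber shown above.
// Assumes the NAudio package is installed (dotnet add package NAudio) and
// that `transcriber` has already been created and connected.
using NAudio.Wave;

var waveIn = new WaveInEvent
{
    // Match the transcriber's configured sample rate: 16 kHz, mono PCM
    WaveFormat = new WaveFormat(16_000, 1)
};

waveIn.DataAvailable += async (_, e) =>
{
    // Forward only the bytes actually recorded in this buffer
    await transcriber.SendAudioAsync(e.Buffer[..e.BytesRecorded]);
};

waveIn.StartRecording();
Console.WriteLine("Recording... press any key to stop.");
Console.ReadKey();
waveIn.StopRecording();
```

Capturing at the same sample rate the transcriber was configured with (16 kHz here) avoids resampling; a mismatch between the two will degrade transcription quality.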