
Overview

TransportParams is the base configuration class for all Pipecat transports. It controls audio input/output settings, video settings, and voice activity detection. Every transport’s params class (DailyParams, LiveKitParams, WebsocketServerParams, etc.) inherits from TransportParams. You typically pass these via the transport’s params argument:
from pipecat.transports.daily import DailyTransport, DailyParams

transport = DailyTransport(
    room_url,
    token,
    "Bot",
    params=DailyParams(
        audio_in_enabled=True,
        audio_out_enabled=True,
        audio_out_sample_rate=24000,
        video_out_enabled=True,
        video_out_width=1280,
        video_out_height=720,
    ),
)

Audio Output

audio_out_enabled (bool, default: False)
Enable audio output streaming.

audio_out_sample_rate (int, default: None)
Output audio sample rate in Hz. When None, uses the default rate from the TTS service.

audio_out_channels (int, default: 1)
Number of output audio channels.

audio_out_bitrate (int, default: 96000)
Output audio bitrate in bits per second.

audio_out_10ms_chunks (int, default: 4)
Number of 10 ms chunks to buffer before sending output audio. Higher values increase latency but reduce overhead.

audio_out_mixer (BaseAudioMixer | Mapping[str, BaseAudioMixer], default: None)
Audio mixer instance for combining audio streams, or a mapping of destination names to mixer instances.

audio_out_destinations (List[str], default: [])
List of audio output destination identifiers for routing audio to specific participants or endpoints.

audio_out_end_silence_secs (int, default: 2)
Seconds of silence to send after an EndFrame. Set to 0 to disable.

audio_out_auto_silence (bool, default: True)
Insert silence frames when the audio output queue is empty. When False, the transport waits for audio data instead of inserting silence, which is useful for scenarios that require uninterrupted playback without artificial gaps.
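To see how audio_out_10ms_chunks trades latency for per-write overhead, here is a back-of-envelope calculation in plain Python. It assumes 16-bit PCM and uses the default values above; the constants are illustrative, not part of the pipecat API:

```python
# Sizing the output write buffer controlled by audio_out_10ms_chunks,
# assuming 16-bit (2-byte) PCM samples.
SAMPLE_RATE = 24000   # audio_out_sample_rate (Hz)
CHANNELS = 1          # audio_out_channels
BYTES_PER_SAMPLE = 2  # 16-bit PCM
CHUNKS = 4            # audio_out_10ms_chunks (default)

samples_per_10ms = SAMPLE_RATE // 100                         # 240 samples
chunk_bytes = samples_per_10ms * CHANNELS * BYTES_PER_SAMPLE  # 480 bytes
write_bytes = chunk_bytes * CHUNKS                            # bytes sent per write
added_latency_ms = CHUNKS * 10                                # buffering delay

print(write_bytes, added_latency_ms)  # 1920 bytes per write, 40 ms buffered
```

Raising CHUNKS sends fewer, larger writes (less overhead) at the cost of proportionally more buffering delay.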

Audio Input

audio_in_enabled (bool, default: False)
Enable audio input streaming.

audio_in_sample_rate (int, default: None)
Input audio sample rate in Hz. When None, uses the transport’s native rate.

audio_in_channels (int, default: 1)
Number of input audio channels.

audio_in_filter (BaseAudioFilter, default: None)
Audio filter to apply to incoming audio (e.g., noise suppression).

audio_in_stream_on_start (bool, default: True)
Start audio input streaming immediately when the transport starts. Set to False to manually control when audio input begins.

audio_in_passthrough (bool, default: True)
Pass input audio frames downstream through the pipeline. When False, audio is consumed by VAD but not forwarded.
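As a sketch of how these input fields combine, the snippet below builds a TransportParams with resampling and a noise-suppression filter. It assumes pipecat is installed with the noisereduce extra, and that the import paths shown match your installed version (check them against your release):

```python
from pipecat.audio.filters.noisereduce_filter import NoisereduceFilter
from pipecat.transports.base_transport import TransportParams

params = TransportParams(
    audio_in_enabled=True,
    audio_in_sample_rate=16000,           # resample incoming audio to 16 kHz
    audio_in_filter=NoisereduceFilter(),  # apply noise suppression before VAD/STT
    audio_in_passthrough=True,            # forward filtered frames downstream
)
```

Setting audio_in_passthrough=False instead would keep the filtered audio available for VAD while withholding the raw frames from the rest of the pipeline.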

Video Output

video_out_enabled (bool, default: False)
Enable video output streaming.

video_out_is_live (bool, default: False)
Enable real-time video output. When True, frames are sent as they arrive rather than buffered.

video_out_width (int, default: 1024)
Video output width in pixels.

video_out_height (int, default: 768)
Video output height in pixels.

video_out_bitrate (int, default: 800000)
Video output bitrate in bits per second.

video_out_framerate (int, default: 30)
Video output frame rate in frames per second.

video_out_color_format (str, default: RGB)
Video output color format string.

video_out_codec (str, default: None)
Preferred video codec for output (e.g., VP8, H264, H265).

video_out_destinations (List[str], default: [])
List of video output destination identifiers.
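A minimal sketch of a live 720p output configuration follows. It assumes pipecat is installed and that TransportParams imports from pipecat.transports.base_transport in your version; the 2 Mbps bitrate is an illustrative choice for 720p30, not a pipecat recommendation:

```python
from pipecat.transports.base_transport import TransportParams

params = TransportParams(
    video_out_enabled=True,
    video_out_is_live=True,       # send frames as they are produced
    video_out_width=1280,
    video_out_height=720,
    video_out_framerate=30,
    video_out_bitrate=2_000_000,  # raise bitrate above the 800 kbps default for 720p
)
```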

Video Input

video_in_enabled (bool, default: False)
Enable video input streaming.

Deprecated Parameters

The following parameters are deprecated and should not be used in new code. See the migration guidance for each parameter.
vad_analyzer (VADAnalyzer, default: None)
Voice Activity Detection analyzer instance. Deprecated in 0.0.101. Use LLMUserAggregator’s vad_analyzer parameter, or VADProcessor if no LLMUserAggregator is needed. Migration example:
# Old (deprecated)
from pipecat.audio.vad.silero import SileroVADAnalyzer

transport = DailyTransport(
    room_url, token, "Bot",
    params=DailyParams(
        audio_in_enabled=True,
        vad_analyzer=SileroVADAnalyzer(),
    )
)

# New (recommended with LLMUserAggregator)
from pipecat.audio.vad.silero import SileroVADAnalyzer
from pipecat.processors.aggregators.llm_response_universal import (
    LLMContextAggregatorPair,
    LLMUserAggregatorParams,
)

transport = DailyTransport(
    room_url, token, "Bot",
    params=DailyParams(audio_in_enabled=True)
)

user_aggregator, assistant_aggregator = LLMContextAggregatorPair(
    context,
    user_params=LLMUserAggregatorParams(
        vad_analyzer=SileroVADAnalyzer()
    ),
)

# Or use VADProcessor if no LLMUserAggregator is needed
from pipecat.audio.vad.silero import SileroVADAnalyzer
from pipecat.pipeline.pipeline import Pipeline
from pipecat.processors.audio.vad_processor import VADProcessor

transport = DailyTransport(
    room_url, token, "Bot",
    params=DailyParams(audio_in_enabled=True)
)

vad_processor = VADProcessor(vad_analyzer=SileroVADAnalyzer())
pipeline = Pipeline([transport.input(), vad_processor, ...])
turn_analyzer (BaseTurnAnalyzer, default: None)
Turn-taking analyzer instance for conversation management. Deprecated in 0.0.99. Use LLMUserAggregator’s user_turn_strategies parameter instead.

Transport Subclasses

Each transport extends TransportParams with provider-specific fields:
- DailyTransport uses DailyParams, adding api_key, api_url, dialin_settings, transcription_enabled, and transcription_settings.
- LiveKitTransport uses LiveKitParams (no additional fields).
- SmallWebRTCTransport uses TransportParams directly.
- WebsocketServerTransport uses WebsocketServerParams, adding add_wav_header, serializer, and session_timeout.
- FastAPIWebsocketTransport uses FastAPIWebsocketParams, adding serializer and session_timeout.
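As one subclass sketch, the snippet below configures a WebsocketServerTransport with its extra fields. Import paths for the websocket transport have moved across pipecat releases, so verify the module paths against your installed version; the 180-second timeout is an illustrative value:

```python
from pipecat.serializers.protobuf import ProtobufFrameSerializer
from pipecat.transports.websocket.server import (
    WebsocketServerParams,
    WebsocketServerTransport,
)

transport = WebsocketServerTransport(
    params=WebsocketServerParams(
        audio_in_enabled=True,        # base TransportParams fields still apply
        audio_out_enabled=True,
        serializer=ProtobufFrameSerializer(),  # wire format for frames
        session_timeout=180,          # drop idle sessions after 3 minutes
    )
)
```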