Python SDK for the Xpoz social media intelligence platform. Query Twitter/X, Instagram, Reddit, and TikTok data through a simple, Pythonic interface.
Installation
pip install xpoz
Requires Python 3.10+.
Get an API Key
Sign up and get your token at https://xpoz.ai/get-token.
Once you have it, pass it directly or set the XPOZ_API_KEY environment variable:
export XPOZ_API_KEY=your-token-here
What is Xpoz?
Xpoz provides unified access to social media data across Twitter/X, Instagram, Reddit, and TikTok. The platform indexes billions of posts, user profiles, and engagement metrics — making it possible to search, analyze, and export social media data at scale.
The SDK wraps Xpoz's MCP server, abstracting away transport, authentication, operation polling, and pagination into a clean developer-friendly API.
Features
37 data methods across Twitter, Instagram, Reddit, and TikTok
Sync and async clients —
XpozClientandAsyncXpozClientAutomatic operation polling — long-running queries are abstracted away
Response modes —
ResponseType.FASTfor quick limited results,PAGINGfor full pagination,CSVfor exportServer-side pagination —
PaginatedResultwithnext_page(),get_page(n)CSV export —
export_csv()on any paginated resultField selection — request only the fields you need in Pythonic snake_case
Pydantic v2 models — fully typed results with autocomplete support
Namespaced API —
client.twitter.*,client.instagram.*,client.reddit.*,client.tiktok.*
Quick Start
from xpoz import XpozClient client = XpozClient("your-api-key") user = client.twitter.get_user("elonmusk") print(f"{user.name} — {user.followers_count:,} followers") results = client.twitter.search_posts("artificial intelligence", start_date="2025-01-01") for tweet in results.data: print(tweet.text, tweet.like_count) client.close()Authentication
Get your API key at https://xpoz.ai/get-token, then use it as follows:
# Pass API key directly client = XpozClient("your-api-key") # Or use XPOZ_API_KEY environment variable import os os.environ["XPOZ_API_KEY"] = "your-api-key" client = XpozClient() # Custom server URL (also reads XPOZ_SERVER_URL env var) client = XpozClient("your-api-key", server_url="https://xpoz.ai/mcp") # Custom operation timeout (default: 300 seconds) client = XpozClient("your-api-key", timeout=600)Context Manager
# Sync — auto-closes on exit with XpozClient("your-api-key") as client: user = client.twitter.get_user("elonmusk") # Async import asyncio from xpoz import AsyncXpozClient async def main(): async with AsyncXpozClient("your-api-key") as client: user = await client.twitter.get_user("elonmusk") results = await client.twitter.search_posts("AI") page2 = await results.next_page() asyncio.run(main())Pagination
Methods that return large datasets use server-side pagination (100 items per page). These return a PaginatedResult[T] with built-in helpers:
results = client.twitter.search_posts("AI") results.data # list[TwitterPost] — current page results.pagination.total_rows # total matching rows results.pagination.total_pages # total pages results.pagination.page_number # current page number results.pagination.page_size # items per page (100) results.pagination.results_count # items on current page results.has_next_page() # bool # Navigate pages page2 = results.next_page() # fetch next page page5 = results.get_page(5) # jump to specific page # Export to CSV csv_url = results.export_csv() # returns download URLResponse Modes
Methods that return PaginatedResult support a response_type parameter to control how results are fetched. Import the ResponseType enum:
from xpoz import XpozClient, ResponseType
Fast mode (default)
Returns up to limit results directly — no polling, no pagination. This is the default behavior when response_type is not specified.
results = client.twitter.search_posts( "bitcoin", limit=10, fields=["id", "text", "like_count"] ) # Equivalent to response_type=ResponseType.FAST for tweet in results.data: print(tweet.text)
Paging mode
Returns full paginated results (100 items per page). Use this when you need to iterate through all results.
results = client.twitter.search_posts( "bitcoin", response_type=ResponseType.PAGING, )
CSV mode
Triggers a server-side CSV export. The result contains no inline data — call export_csv() to get the download URL.
results = client.twitter.search_posts( "bitcoin", response_type=ResponseType.CSV, ) csv_url = results.export_csv()
Supported methods
response_type and limit are available on:
Platform | Method |
| |
| |
| |
| |
| |
| |
| |
TikTok |
|
TikTok |
|
TikTok |
|
Field Selection
All methods accept a fields parameter. Use snake_case — the SDK translates to camelCase automatically.
# Only fetch the fields you need (faster + less memory) results = client.twitter.search_posts( "AI", fields=["id", "text", "like_count", "retweet_count", "created_at_date"] ) user = client.twitter.get_user( "elonmusk", fields=["id", "username", "name", "followers_count", "description"] )
Requesting fewer fields significantly improves response time.
Query Syntax
The query parameter on all search_* and get_*_by_keywords methods supports a Lucene-style full-text syntax across Twitter, Instagram, and Reddit.
Exact phrase
Wrap in double quotes to require an exact match:
"machine learning" "climate change"
Keywords (any word)
Space-separated terms without quotes match posts containing any of the words:
AI crypto blockchain
Boolean operators
Use AND, OR, NOT (case-insensitive). A bare space is treated as OR — be explicit:
"deep learning" AND python tensorflow OR pytorch climate NOT politics
Grouping with parentheses
(AI OR "artificial intelligence") AND ethics (startup OR entrepreneur) NOT "venture capital"
Combined example
results = client.twitter.search_posts( '("machine learning" OR "deep learning") AND python NOT spam', start_date="2025-01-01", language="en", )Note: Do not use from:, lang:, since:, or until: in the query string — use the dedicated parameters (author_username, language, start_date, end_date) instead.
Error Handling
from xpoz import ( XpozError, AuthenticationError, ConnectionError, OperationTimeoutError, OperationFailedError, OperationCancelledError, NotFoundError, ValidationError, ) try: user = client.twitter.get_user("nonexistent_user_12345") except OperationFailedError as e: print(f"Operation {e.operation_id} failed: {e.error}") except OperationTimeoutError as e: print(f"Timed out after {e.elapsed_seconds}s") except AuthenticationError: print("Invalid API key") except XpozError as e: print(f"Xpoz error: {e}")API Reference
Twitter — client.twitter
get_user(identifier, identifier_type="username", *, fields) -> TwitterUser
Get a single Twitter user profile.
# By username (default) user = client.twitter.get_user("elonmusk") # By numeric ID user = client.twitter.get_user("44196397", identifier_type="id")search_users(name, *, limit=None, fields) -> list[TwitterUser]
Search users by name or username. Returns up to 10 results.
users = client.twitter.search_users("elon")get_user_connections(username, connection_type, *, fields, force_latest) -> PaginatedResult[TwitterUser]
Get followers or following for a user.
followers = client.twitter.get_user_connections("elonmusk", "followers") following = client.twitter.get_user_connections("elonmusk", "following")get_users_by_keywords(query, *, fields, start_date, end_date, language, force_latest, response_type, limit) -> PaginatedResult[TwitterUser]
Find users who authored posts matching a keyword query. Includes aggregation fields like relevant_tweets_count, relevant_tweets_likes_sum.
users = client.twitter.get_users_by_keywords( '"machine learning"', fields=["username", "name", "followers_count", "relevant_tweets_count", "relevant_tweets_likes_sum"] )
get_posts_by_ids(post_ids, *, fields, force_latest) -> list[TwitterPost]
Get 1-100 posts by their IDs.
tweets = client.twitter.get_posts_by_ids(["1234567890", "0987654321"])
get_posts_by_author(identifier, identifier_type="username", *, fields, start_date, end_date, force_latest, response_type, limit) -> PaginatedResult[TwitterPost]
Get all posts by an author with optional date filtering.
results = client.twitter.get_posts_by_author("elonmusk", start_date="2025-01-01")search_posts(query, *, fields, start_date, end_date, author_username, author_id, language, force_latest, response_type, limit) -> PaginatedResult[TwitterPost]
Full-text search with filters. Supports exact phrases ("machine learning"), boolean operators (AI AND python), and parentheses.
results = client.twitter.search_posts( '"artificial intelligence" AND ethics', start_date="2025-01-01", end_date="2025-06-01", language="en", fields=["id", "text", "like_count", "author_username", "created_at_date"] )
get_retweets(post_id, *, fields, start_date) -> PaginatedResult[TwitterPost]
Get retweets of a specific post (database only).
retweets = client.twitter.get_retweets("1234567890")get_quotes(post_id, *, fields, start_date, force_latest) -> PaginatedResult[TwitterPost]
Get quote tweets of a specific post.
quotes = client.twitter.get_quotes("1234567890")get_comments(post_id, *, fields, start_date, force_latest) -> PaginatedResult[TwitterPost]
Get replies to a specific post.
comments = client.twitter.get_comments("1234567890")get_post_interacting_users(post_id, interaction_type, *, fields, force_latest) -> PaginatedResult[TwitterUser]
Get users who interacted with a post. interaction_type: "commenters", "quoters", "retweeters".
commenters = client.twitter.get_post_interacting_users("1234567890", "commenters")count_posts(phrase, *, start_date, end_date) -> int
Count tweets containing a phrase within a date range.
count = client.twitter.count_posts("bitcoin", start_date="2025-01-01") print(f"{count:,} tweets mention bitcoin")Instagram — client.instagram
get_user(identifier, identifier_type="username", *, fields) -> InstagramUser
user = client.instagram.get_user("instagram") print(f"{user.full_name} — {user.follower_count:,} followers")search_users(name, *, limit=None, fields) -> list[InstagramUser]
users = client.instagram.search_users("nasa")get_user_connections(username, connection_type, *, fields, force_latest) -> PaginatedResult[InstagramUser]
followers = client.instagram.get_user_connections("instagram", "followers")get_users_by_keywords(query, *, fields, start_date, end_date, force_latest, response_type, limit) -> PaginatedResult[InstagramUser]
users = client.instagram.get_users_by_keywords('"sustainable fashion"')get_posts_by_ids(post_ids, *, fields, force_latest) -> list[InstagramPost]
Post IDs must be in strong_id format: "media_id_user_id" (e.g. "3606450040306139062_4836333238").
posts = client.instagram.get_posts_by_ids(["3606450040306139062_4836333238"])
get_posts_by_user(identifier, identifier_type="username", *, fields, start_date, end_date, force_latest, response_type, limit) -> PaginatedResult[InstagramPost]
results = client.instagram.get_posts_by_user("nasa")search_posts(query, *, fields, start_date, end_date, force_latest, response_type, limit) -> PaginatedResult[InstagramPost]
results = client.instagram.search_posts("travel photography")get_comments(post_id, *, fields, start_date, end_date, force_latest) -> PaginatedResult[InstagramComment]
comments = client.instagram.get_comments("3606450040306139062_4836333238")get_post_interacting_users(post_id, interaction_type, *, fields, force_latest) -> PaginatedResult[InstagramUser]
interaction_type: "commenters", "likers".
likers = client.instagram.get_post_interacting_users("3606450040306139062_4836333238", "likers")Reddit — client.reddit
get_user(username, *, fields) -> RedditUser
user = client.reddit.get_user("spez") print(f"{user.username} — {user.total_karma:,} karma")search_users(name, *, limit=None, fields) -> list[RedditUser]
users = client.reddit.search_users("spez")get_users_by_keywords(query, *, fields, start_date, end_date, subreddit, force_latest) -> PaginatedResult[RedditUser]
users = client.reddit.get_users_by_keywords('"machine learning"', subreddit="MachineLearning")search_posts(query, *, fields, start_date, end_date, sort, time, subreddit, force_latest, response_type, limit) -> PaginatedResult[RedditPost]
sort: "relevance", "hot", "top", "new", "comments". time: "hour", "day", "week", "month", "year", "all".
results = client.reddit.search_posts( "python tutorial", subreddit="learnpython", sort="top", time="month" )
get_post_with_comments(post_id, *, post_fields, comment_fields, force_latest) -> RedditPostWithComments
Returns a composite object with the post and its paginated comments.
result = client.reddit.get_post_with_comments("abc123") print(result.post.title) for comment in result.comments: print(f" {comment.author_username}: {comment.body[:80]}")search_comments(query, *, fields, start_date, end_date, subreddit) -> PaginatedResult[RedditComment]
comments = client.reddit.search_comments("helpful tip", subreddit="LifeProTips")search_subreddits(query, *, limit=None, fields) -> list[RedditSubreddit]
subs = client.reddit.search_subreddits("machine learning")get_subreddit_with_posts(subreddit_name, *, subreddit_fields, post_fields, force_latest) -> SubredditWithPosts
result = client.reddit.get_subreddit_with_posts("wallstreetbets") print(f"r/{result.subreddit.display_name} — {result.subreddit.subscribers_count:,} members") for post in result.posts: print(f" {post.title} ({post.score} points)")get_subreddits_by_keywords(query, *, fields, start_date, end_date, force_latest) -> PaginatedResult[RedditSubreddit]
subs = client.reddit.get_subreddits_by_keywords("cryptocurrency")TikTok — client.tiktok
get_user(identifier, identifier_type="username", *, fields) -> TiktokUser
user = client.tiktok.get_user("charlidamelio") print(f"{user.nickname} — {user.follower_count:,} followers") # By numeric ID user = client.tiktok.get_user("123456789", identifier_type="id")search_users(name, *, limit=None, fields) -> list[TiktokUser]
users = client.tiktok.search_users("charli") top_five = client.tiktok.search_users("charli", limit=5)get_users_by_keywords(query, *, fields, start_date, end_date, force_latest, response_type, limit) -> PaginatedResult[TiktokUser]
users = client.tiktok.get_users_by_keywords( '"machine learning"', response_type=ResponseType.FAST, limit=20, )
get_posts_by_ids(post_ids, *, fields, force_latest) -> list[TiktokPost]
posts = client.tiktok.get_posts_by_ids(["7123456789012345678"])
get_posts_by_user(identifier, identifier_type="username", *, fields, start_date, end_date, force_latest, response_type, limit) -> PaginatedResult[TiktokPost]
results = client.tiktok.get_posts_by_user("charlidamelio", start_date="2025-01-01")search_posts(query, *, fields, start_date, end_date, force_latest, response_type, limit) -> PaginatedResult[TiktokPost]
results = client.tiktok.search_posts( "travel vlog", start_date="2025-01-01", response_type=ResponseType.FAST, limit=30, )
get_comments(post_id, *, fields, start_date, end_date, force_latest) -> PaginatedResult[TiktokComment]
comments = client.tiktok.get_comments("7123456789012345678")Type Models
All models are Pydantic v2 BaseModel subclasses with extra="allow" (unknown fields are preserved, not rejected). All fields are optional and default to None.
TwitterPost
Field | Type | Description |
|
| Post ID |
|
| Post text content |
|
| Author's user ID |
|
| Author's username |
|
| Number of likes |
|
| Number of retweets |
|
| Number of replies |
|
| Number of quotes |
|
| Number of impressions |
|
| Number of bookmarks |
|
| Language code |
|
| Hashtags in tweet |
|
| Mentioned usernames |
|
| Media attachment URLs |
|
| URLs in tweet |
|
| Country (if geo-tagged) |
|
| Creation timestamp |
|
| Creation date (YYYY-MM-DD) |
|
| Thread conversation ID |
|
| ID of quoted tweet |
|
| ID of parent tweet |
|
| Whether this is a retweet |
|
| Sensitive content flag |
TwitterUser
Field | Type | Description |
|
| User ID |
|
| Username (handle) |
|
| Display name |
|
| Bio text |
|
| Location string |
|
| Verification status |
|
| Verification type |
|
| Number of followers |
|
| Number of following |
|
| Total tweets |
|
| Total likes |
|
| Profile picture URL |
|
| Account creation timestamp |
|
| Account location |
|
| Inauthenticity flag |
|
| Inauthenticity probability |
|
| Tweeting frequency |
InstagramPost
Field | Type | Description |
|
| Post ID (strong_id format) |
|
| Post caption |
|
| Author username |
|
| Author display name |
|
| Number of likes |
|
| Number of comments |
|
| Number of reshares |
|
| Video play count |
|
| Media type |
|
| Image URL |
|
| Video URL |
|
| Creation date |
InstagramUser
Field | Type | Description |
|
| User ID |
|
| Username |
|
| Display name |
|
| Bio text |
|
| Private account |
|
| Verified status |
|
| Followers |
|
| Following |
|
| Total posts |
|
| Profile picture URL |
InstagramComment
Field | Type | Description |
|
| Comment ID |
|
| Comment text |
|
| Author username |
|
| Parent post ID |
|
| Number of likes |
|
| Reply count |
|
| Creation date |
RedditPost
Field | Type | Description |
|
| Post ID |
|
| Post title |
|
| Post body text |
|
| Author username |
|
| Subreddit name |
|
| Net score |
|
| Upvote count |
|
| Comment count |
|
| Post URL |
|
| Reddit permalink |
|
| Self post (text only) |
|
| NSFW flag |
|
| Creation date |
RedditUser
Field | Type | Description |
|
| User ID |
|
| Username |
|
| Total karma |
|
| Link karma |
|
| Comment karma |
|
| Reddit Gold status |
|
| Moderator status |
|
| Profile bio |
|
| Account creation date |
RedditComment
Field | Type | Description |
|
| Comment ID |
|
| Comment text |
|
| Author username |
|
| Parent post ID |
|
| Net score |
|
| Nesting depth |
|
| Is OP |
|
| Creation date |
RedditSubreddit
Field | Type | Description |
|
| Subreddit ID |
|
| Subreddit name |
|
| Subreddit title |
|
| Short description |
|
| Full description |
|
| Subscriber count |
|
| Active users |
|
| NSFW flag |
|
| Creation date |
TiktokPost
Field | Type | Description |
|
| Post ID |
|
| Post caption/description |
|
| Language of description |
|
| Author user ID |
|
| Author username |
|
| Author display name |
|
| Number of likes |
|
| Number of comments |
|
| Video play count |
|
| Number of collects/saves |
|
| Number of downloads |
|
| Number of forwards/shares |
|
| Thumbnail URL |
|
| Post type code |
|
| Private post flag |
|
| Creation timestamp |
|
| Creation date (YYYY-MM-DD) |
TiktokUser
Field | Type | Description |
|
| User ID |
|
| Username |
|
| Display name |
|
| Bio text |
|
| Secure user ID |
|
| Profile picture URL |
|
| Private account |
|
| Verified status |
|
| Number of followers |
|
| Number of following |
|
| Total likes received |
|
| Total posts |
|
| Profile language |
|
| Account region |
|
| Account creation date |
TiktokComment
Field | Type | Description |
|
| Comment ID |
|
| Parent post ID |
|
| Author user ID |
|
| Author username |
|
| Comment text |
|
| Number of likes |
|
| Creation timestamp |
|
| Creation date (YYYY-MM-DD) |
Composite Types
RedditPostWithComments — returned by get_post_with_comments():
post: RedditPostcomments: list[RedditComment]comments_pagination: PaginationInfo | None
SubredditWithPosts — returned by get_subreddit_with_posts():
subreddit: RedditSubredditposts: list[RedditPost]posts_pagination: PaginationInfo | None
Environment Variables
Variable | Description | Default |
| API key for authentication | — |
| MCP server URL |
Testing
Tests hit the live Xpoz API and require a valid API key:
XPOZ_API_KEY=your-api-key pytest tests/ -v
Tests must run sequentially in a single process to avoid API rate limiting. Do not run multiple pytest processes in parallel.
License
MIT
