
API Best Practices

We're thrilled that you're taking advantage of our platform by grabbing raw data, hooking in your own pieces of the ad serving puzzle, or otherwise building on top of our infrastructure. There are a few ground rules that will make sure you have the best experience possible, and keep your applications healthy as our API evolves. Please stay in touch with your implementation consultant as you get started building.


Retrieve only the objects you need

GET multiple objects by ID

Most services support the retrieval of multiple specific objects by ID. To do this, you append a comma-separated list of IDs to the query string. For example, the following request would return only the publishers with the specified IDs.
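The request itself is not shown on this page. As a sketch, with a hypothetical host and made-up IDs, the URL would be built like this:

```python
# Hypothetical host and publisher IDs, for illustration only.
API = "https://api.example.com"
publisher_ids = [101, 102, 103]

# Append a comma-separated list of IDs to the query string.
url = f"{API}/publisher?id={','.join(str(i) for i in publisher_ids)}"
print(url)
```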

Filter your results

Filtering allows you to specify a subset of objects to be returned. For example, the following call would return only line items that have the "active" state:

For fields of type int, double, date, or money, you can prefix the field name with min_ or max_ in the filter. For example, the following request would return only line items that have been modified on or after January 1, 2013:
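Neither filtering request is reproduced on this page. As a sketch, with a hypothetical host (and an assumed last_modified field name), they might look like:

```python
API = "https://api.example.com"  # hypothetical host

# Only line items in the "active" state:
active_url = f"{API}/line-item?state=active"

# min_ prefix on a date field: only line items modified
# on or after January 1, 2013 (field name assumed):
modified_url = f"{API}/line-item?min_last_modified=2013-01-01"

print(active_url)
print(modified_url)
```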

Paginate your results

You should write your application to take advantage of our pagination support. You can paginate results by specifying start_element and num_elements in the query string of the GET request. For example, the following request would return the first 50 objects in the response:

To retrieve the next 50, you would simply set start_element=50.
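The requests referenced above are not shown on this page; with a hypothetical host, the first and second pages of 50 would be requested like this:

```python
API = "https://api.example.com"  # hypothetical host

# First 50 objects:
page_1 = f"{API}/line-item?start_element=0&num_elements=50"
# Next 50 objects:
page_2 = f"{API}/line-item?start_element=50&num_elements=50"

print(page_1)
print(page_2)
```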

The maximum number of objects that can be returned, regardless of pagination, is 100. Please note that if you request over 100 objects, we will only return the first 100 and will not provide an error message.
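Putting the two rules together, a paging loop requests 100 objects at a time (the documented cap) and stops on the first short page. This sketch stubs the GET call with a local fake dataset so the logic runs offline; swap fetch_page() for your real request:

```python
# fetch_page() stands in for a real GET with start_element/num_elements.
DATASET = list(range(230))  # pretend there are 230 objects server-side
PAGE_SIZE = 100             # the API never returns more than this

def fetch_page(start_element, num_elements):
    return DATASET[start_element:start_element + num_elements]

objects, start = [], 0
while True:
    page = fetch_page(start, PAGE_SIZE)
    objects.extend(page)
    if len(page) < PAGE_SIZE:  # a short page means we've reached the end
        break
    start += PAGE_SIZE

print(len(objects))  # 230
```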

Throttle your calls

There are limits on the number of requests you can make against our APIs per minute. We categorize these rate limits into read and write requests. Currently, the default read and write limit is 1000 per minute. These counters will reset at the end of the minute. If you exceed the throttling limit, the API will respond with the HTTP 429 (Too Many Requests) response code along with an error message in the response contents. We also return a response header with a Retry-After field that specifies the number of seconds to wait before attempting more API calls.

If you make API calls using curl, you can retrieve the response header by including the -v parameter in your request.
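The verbose output itself is not reproduced on this page. A hypothetical excerpt of the relevant response headers (matching the 31-second value discussed next) might look like:

```
< HTTP/1.1 429 Too Many Requests
< Retry-After: 31
```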

In this case, you can retry your request in 31 seconds (the value of the Retry-After field).

If you're calling the API from a script, you should check for the 429 response code when you make your API calls. If you receive this code, sleep for the time returned by the Retry-After field of the response header. After sleeping for the specified amount of time, your script can continue on.
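The steps above can be sketched as a retry loop. Here call_api() is a stub that returns 429 twice before succeeding, so the flow runs offline; replace it with your real HTTP call, which should surface the status code, the Retry-After value, and the response body:

```python
import time

attempts = {"n": 0}

def call_api():
    # Stub: fail twice with 429 (Retry-After of 0 seconds, to keep the
    # demo fast), then succeed. A real call would hit the API.
    attempts["n"] += 1
    if attempts["n"] < 3:
        return 429, 0, None          # (status, retry_after_seconds, body)
    return 200, None, {"response": {"status": "OK"}}

def call_with_retry(max_retries=5):
    for _ in range(max_retries):
        status, retry_after, body = call_api()
        if status != 429:
            return body
        time.sleep(retry_after)      # wait as long as the server asks
    raise RuntimeError("still throttled after retries")

result = call_with_retry()
print(result["response"]["status"])  # OK
```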

You can also find rate limit information in dbg_info on every call, although this method of checking rate limit status is not as reliable as using the response header.

See API Usage Constraints for more details on rate limits.

Update Arrays with "append=true"

When you update an object via the API, its array values are overwritten with whatever values are provided in the PUT request. This is fine if the intended behavior is to clear out an array's values and replace them with your updated data. However, a flag can be used to append data to an array rather than replace it, which is particularly useful when updating very lengthy arrays: add the query parameter append=true to a PUT request to set the update to append mode.

For our example, say that we had a simple Profile Object with the following country_targets array:

Profile Object, before Update
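The object itself is not shown on this page; a hypothetical example (IDs and field contents invented for illustration) might be:

```json
{
  "profile": {
    "id": 1234,
    "country_targets": [
      { "id": 233, "name": "United States" }
    ]
  }
}
```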

If we were to use append=true in the PUT call to update to this object, we could use the following JSON data without fear of overwriting our profile's existing country_targets data:

JSON Update data
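The payload is not shown on this page; continuing the hypothetical example above, appending a new country would only require the new entry:

```json
{
  "profile": {
    "country_targets": [
      { "id": 35, "name": "Canada" }
    ]
  }
}
```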

We would use the following CURL command (replacing <profile_ID> with the appropriate value):

CURL example
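The command is not shown on this page. A plausible form, with a hypothetical host, the <profile_ID> placeholder from the text, and authentication omitted, would be:

```
curl -X PUT -H "Content-Type: application/json" \
  -d @profile-update.json \
  "https://api.example.com/profile?id=<profile_ID>&append=true"
```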

As a result, our profile object would be updated to reflect the following:

Resulting Profile Object
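The resulting object is not shown on this page; in the hypothetical example, the existing and appended entries would both be present:

```json
{
  "profile": {
    "id": 1234,
    "country_targets": [
      { "id": 233, "name": "United States" },
      { "id": 35, "name": "Canada" }
    ]
  }
}
```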

Use a config-driven API end-point

Make sure that you can change the API base URL easily. In the example below, the API URL is defined as a variable and can be used throughout the code base. If that URL should ever need to change, it can be modified in a single location.
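The example referenced above is not shown on this page; a minimal sketch (hypothetical host and environment-variable name) might be:

```python
import os

# Single source of truth for the endpoint; change it here (or via the
# environment) and every call site picks up the new value.
API_BASE_URL = os.environ.get("API_BASE_URL", "https://api.example.com")

def service_url(service):
    return f"{API_BASE_URL}/{service}"

print(service_url("line-item"))
```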

Build an API wrapper

Centralizing the code where you send requests and handle responses is a great practice. This will allow you to do logging, error handling and more in one location.
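A minimal sketch of such a wrapper, assuming a hypothetical host and the "status": "OK" convention described later on this page. Every request flows through call(), so logging, headers, and error handling live in one place:

```python
import json
import logging
from urllib import request

API_BASE_URL = "https://api.example.com"  # hypothetical host
log = logging.getLogger("api")

def build_url(service, params=""):
    return f"{API_BASE_URL}/{service}" + (f"?{params}" if params else "")

def call(method, service, params="", body=None, token=None):
    req = request.Request(
        build_url(service, params),
        data=json.dumps(body).encode() if body is not None else None,
        method=method,
    )
    req.add_header("Content-Type", "application/json")
    if token:
        req.add_header("Authorization", token)
    log.info("%s %s", method, req.full_url)
    with request.urlopen(req) as resp:           # one place for transport errors
        payload = json.loads(resp.read())
    if payload.get("response", {}).get("status") != "OK":
        raise RuntimeError(payload)              # one place for API errors
    return payload["response"]
```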

Keep your reports lean and focused

Here are some tips for preventing your reports from being unnecessarily large or taking a long time to process:

  • Shorten the report_interval (e.g., from lifetime to last_48_hours)
  • Add higher-level filters (e.g., for a specific publisher, advertiser, or line item)
  • Avoid combining granular buy-side and sell-side dimensions (e.g., creative and placement), as this increases the number of rows exponentially. If you need to report on such combinations, consider using Bulk Reporting Feeds or Log-Level Data Feeds

If you must pull very large reports, use the instructions in Report Pagination.

Allow for additional fields on responses

As our API team implements new features, it is necessary to include new fields on various API services. Your integration should be flexible enough to handle additional fields on each service that were not previously returned.

Be aware of breaking changes

Our services change continually as we add new features, but we do our best to create stability so that the applications our clients build on top of our API can adapt gracefully as well.

When we introduce a breaking change, we will support two versions of the API in production, one with and one without the breaking change, for 60 days. We will announce these changes in our API Release Notes. For more details about what constitutes a breaking change, see our Breaking Changes policy.

When two versions of the API are being supported for 60 days, the breaking change will be implemented in the newer version.

  • In order to access the current version (with no breaking changes) use a format like:
  • In order to access the newer version (with the breaking change features) use a format like: 
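The URL formats themselves are not reproduced on this page, and the actual convention may differ (for example, a version header instead of a hostname). Purely as a hypothetical illustration:

```
https://api.example.com/line-item        (current version, no breaking changes)
https://api-new.example.com/line-item    (newer version, with the breaking change)
```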

Test your implementation in the Client Testing environment

Always test your initial API integration, as well as any subsequent updates, in the Client Testing environment. Doing so will allow you to identify any unexpected behavior in a safe environment before migrating to Production, where bugs in your code may have a real cost impact. This will also ensure that your development efforts do not interfere with any stable applications you have running in Production (for instance, by using up your API rate limit). The API endpoint for the Client Testing environment is . Remember to ensure that you have the ability to roll back changes in case you do in fact encounter unexpected behavior in Production.

Be mindful of object limits

We limit the number of objects each member can create and use on the platform. This limit includes inactive and unused objects (such as line items set to inactive status, placements that have never served an impression, and so on). You should use the Object Limit Service to view your limits and proactively monitor your usage.

Be mindful of your process scheduling

If possible, schedule your processes so that they do not overlap with each other. If there is no business need to perform your bulk operations during business hours, try to schedule these processes on off-peak hours so that you maximize your API usage throughout the day. Remember, you are allotted a certain amount of READ and WRITE calls per minute. Try to take advantage of the times at which you are not making any calls to the API so that you have additional headroom at the times that you need it, and prioritize your time-sensitive operations.

Read the entire API Wiki before using the API

There are many tips, tricks, and examples throughout the API Wiki that will be useful in developing your integration.


Don't assume an API call was successful

All successful API calls will receive a response containing a "status" of "OK". If the response does not contain this status, the call failed for some reason. If the "status" is "error", an error message will be included in the response. Below is an example of a successful response.
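The example is not reproduced on this page; a hypothetical minimal success response would include the status field described above (real responses also carry service-specific fields):

```json
{
  "response": {
    "status": "OK"
  }
}
```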

Don't rely on default fields

It's best practice to pass the field values that you want rather than relying on default field values. If default values change, and you are relying on defaults, you may experience unexpected results.

Don't make unnecessary updates

When updating objects, you can avoid making unnecessary updates by passing only the fields that are changing to the API. A good way to ensure this practice is to cache your GET calls, compare the cache to the changes you want to make, and then make PUT calls only for what's different.

If you need to update all of the objects in a set - for instance, updating the cost_cpm on all placements on a site - you should not iterate through each of the objects blindly making PUT calls. Instead, issue a GET call to retrieve the current state for each of the objects in your set. Where possible, be sure to use the filtering and sorting functionality documented in API Semantics to retrieve only the objects which will need an update. Compare the current state of the returned objects to the desired state, and issue a PUT only for those objects which actually require an update. 
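The GET-diff-PUT pattern above can be sketched as follows. The cached objects and the update() function are stand-ins for real API calls (field names are taken from the cost_cpm example in the text), stubbed so the logic runs offline:

```python
# Pretend this cache came from a GET on all placements of a site.
current = {
    1: {"cost_cpm": 1.50},
    2: {"cost_cpm": 2.00},
    3: {"cost_cpm": 2.00},
}
desired_cpm = 2.00

updates_sent = []

def update(placement_id, fields):
    updates_sent.append((placement_id, fields))  # stand-in for a PUT call

for pid, obj in current.items():
    if obj["cost_cpm"] != desired_cpm:           # diff against the cache
        update(pid, {"cost_cpm": desired_cpm})   # PUT only what changed

print(updates_sent)  # [(1, {'cost_cpm': 2.0})]
```

Only placement 1 differs from the desired state, so only one PUT is issued instead of three.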

Don't authenticate unnecessarily

When you authenticate, your token remains valid for 2 hours. You do not need to re-authenticate within this time. If you do re-authenticate, please note the following limitation: The API permits you to authenticate successfully 10 times per 5-minute period. Any subsequent authentication attempts within those 5 minutes will result in an error.

It is best practice to listen for the "NOAUTH" error_id in your call responses and re-authenticate only after receiving it.
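That pattern can be sketched as a wrapper that re-authenticates only when the API reports "NOAUTH". Here raw_call() and authenticate() are stubs so the flow runs offline; the first call fails, the retry succeeds:

```python
state = {"token": "expired", "auths": 0}

def authenticate():
    # Stand-in for a real authentication call.
    state["auths"] += 1
    state["token"] = "fresh"

def raw_call():
    # Stand-in for a real API call that reports NOAUTH on a stale token.
    if state["token"] != "fresh":
        return {"response": {"error_id": "NOAUTH"}}
    return {"response": {"status": "OK"}}

def call():
    resp = raw_call()
    if resp["response"].get("error_id") == "NOAUTH":
        authenticate()   # re-authenticate only after the API asks for it
        resp = raw_call()
    return resp

result = call()
print(result["response"]["status"], state["auths"])  # OK 1
```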

For authentication instructions, see Authentication Service.

Don't pull all reports at the same time

This can cause the reporting backend to become overloaded, resulting in delayed reports, and can even impact reports that are run later in the day. For more information, see the Report Throttling section of the Report Service page.

Don't make bulk requests to the reporting service

If your architecture calls for multiple requests from the reporting service per hour or day, investigate higher-level reports with more data to see if you can get the data you need with fewer calls to the API. 

For instance, if you are requesting reports once per hour for every advertiser and publisher on your network, you should investigate whether using the Network Analytics Report - rather than individual requests for the Advertiser Analytics or Publisher Analytics reports - would fulfill your needs and match your use-case better. 

For more information on all of the available reports and their fields, see the API documentation on the Reporting Service.

If you find that higher level reports do not fulfill your needs, consider making use of the Bulk Reporting Feeds or Log-Level Data Feeds.
