Two weeks ago, right before SatSummit, we hosted a “Satellite Tasking API Sprint” at our office in Alexandria. The goal was to initiate a conversation around standardizing how users submit tasking requests to data providers. The companies attending, made up of data providers, analytics users, and developers, represented a wide and deep knowledge base around existing tasking capabilities. Coming into a sprint like this, people often have solutions in mind, but I think everyone came out with some new ideas and use-cases to consider.
We want to thank everyone who attended; it was a great turnout. After a couple of years of reduced face time, it was nice to see how well it works just getting people together in a room.
After a round of welcomes and introductions, we had a series of lightning talks, where attendees were able to present on their current efforts, experiences, and thoughts on developing a tasking API.
- Alex (Orbit Logic), Luis and Paulo (SkyWatch), and Derek (BlackSky) presented work they’ve done on creating tasking apps and APIs, giving a good overview of existing capabilities for unifying multiple data providers.
- Scott (OGC) and Drew (Botts Innovative Research) talked about ongoing OGC efforts and an overview and status of OpenSensorHub and the Connected Systems API.
- Representing analytics users, Eric and Mark (Ursa) and Dylan (Up42) offered up their experiences integrating data from multiple providers.
- Joe (Hydrosat) and Marc (SkyFi) gave some great insights on what we need to do, emphasizing the role of the user over the data provider.
Lightning talk speakers:
- Alex Herz – Orbit Logic
- Joe Reed – Hydrosat
- Scott Simmons – OGC
- Drew Botts – OpenSensorHub
- Derek Daczewitz – BlackSky
- Luis Veci – SkyWatch
- Paulo de Figueiredo Cruz – SkyWatch
- Marc Horowitz – SkyFi
- Dylan Bartels – Up42
- Eric Cote – Ursa
The initial stated goals of the sprint were to get to a minimal core set of parameters and agree on basic endpoints and responses. Clearly there is a spatial and temporal component to tasking, but what other parameters are needed? How do we provide pricing, handle contracts, update users on status, and deliver final products? How do we handle complex interdependent collections such as for mosaics, stereo pairs, or sequences of SAR images for interferometry?
One person asked if the goal was to develop an API that captures what exists now, or to design how we think it should work. It turned out the most interesting discussions centered on tasking as a process, rather than the details of a transactional API with a data provider. Tasking is really about the negotiation, as Phil Varner (Element 84) put it: a user says “This is what I want” and the provider responds with “This is what I can offer”. The questions that arose were less about detail and more about how users should interact with the provider. How do users want to discover what is feasible? How do they evaluate multiple possible options and request one or more of those options?
One thing became clear as the day went on: “tasking” is a poor word choice. Users are not “tasking” satellites; that is what providers do based on a user request. Users order data, not unlike how they might order any other product from a data archive, except the data is fulfilled in the future and fulfillment is not guaranteed.
The sat-tasking-sprint repository on GitHub is the main hub for notes, implementations, and information on this effort. Some key issues arose from the discussion and while many more questions were raised than answers, some of the discussion was relatively straightforward and led to several general conclusions around how the API should operate:
- JSON should be used for requests and responses, using GeoJSON where appropriate
- Ideally, data is ultimately delivered with STAC metadata, therefore use STAC naming conventions and field names where there is alignment
- All parts of the API should be allowed to be asynchronous, including feasibility requests (as there may be a manual step involved)
- The provider should include unique user-provided information attached to the request (such as an order number)
- There should be an initial feasibility request that does not actually place an order, but provides details about how the request can be met. A later API call could place an order, and these two calls could be combined for automated systems
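To make these conclusions concrete, here is a minimal sketch of what a feasibility request body might look like. All field names here are illustrative assumptions, not part of any agreed specification; the AOI uses GeoJSON and the datetime interval follows STAC naming conventions, per the conclusions above:

```python
import json

# Hypothetical feasibility request -- field names are illustrative only;
# no parameter set was finalized at the sprint.
feasibility_request = {
    # GeoJSON AOI, per the "use GeoJSON where appropriate" conclusion
    "geometry": {
        "type": "Polygon",
        "coordinates": [[[-77.05, 38.80], [-77.05, 38.85],
                         [-77.00, 38.85], [-77.00, 38.80],
                         [-77.05, 38.80]]],
    },
    # STAC-style datetime interval for the Time of Interest (TOI)
    "datetime": "2023-01-01T00:00:00Z/2023-01-15T00:00:00Z",
    # Unique user-provided reference the provider echoes back in responses
    "user_reference": "ORDER-12345",
}

print(json.dumps(feasibility_request, indent=2))
```

Because feasibility checks may involve a manual step, a real API would likely return a request ID here and deliver the actual options asynchronously.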
Push Choices to the Users
There was a general consensus that users start by making a “feasibility request”. Included in the request is usually a spatial Area of Interest (AOI), a date/time range or Time of Interest (TOI), and possibly some additional parameters constraining the options. What is returned by the provider is a list of possible results that may vary by total area of coverage, time of acquisition, price, resolution, sun angle, or virtually any collection parameter.
Rather than the provider trying to decide what the user wants from the available options, this choice should be pushed back to the user. An API can try to encapsulate user preference (e.g., “get me the earliest collect possible”), but this requires making tradeoffs among different collect parameters that the user is in a better position to evaluate. The data provider won’t have all the information needed to know what the user prefers, and trying to encapsulate user preference in requests would quickly lead to a complex system that still ends up being a black box to the user.
For example, the picture below shows an AOI as a red outlined polygon. The green and purple regions indicate what the data provider can provide via tasking. The green region covers more of the AOI, but in this case has a high off-nadir viewing angle leading to a lower effective resolution. The purple region covers less of the region but with a much higher effective resolution. The user knows what their requirements are and will be able to choose between the two options, or select both if desired. The data provider has no information useful to making that decision on their behalf.
Instead, these options can be pushed to the user. With options in hand, the user can sort and evaluate the tradeoffs between different collection parameters. For situations where there is a single parameter of interest, this could happen automatically by sorting and taking the first option (e.g., the cheapest, the earliest).
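For the single-parameter case, “sort and take the first” is simple to express in code. The option objects below are hypothetical; real providers would return their own collection parameters:

```python
# Hypothetical feasibility options returned by a provider; field names
# are illustrative, not from any agreed specification.
options = [
    {"id": "opt-a", "coverage_pct": 95, "price_usd": 400, "datetime": "2023-01-10T14:00:00Z"},
    {"id": "opt-b", "coverage_pct": 70, "price_usd": 300, "datetime": "2023-01-03T09:30:00Z"},
    {"id": "opt-c", "coverage_pct": 85, "price_usd": 250, "datetime": "2023-01-07T11:15:00Z"},
]

# Single-parameter preferences reduce to a sort and a take-first.
# ISO-8601 strings in a uniform format sort correctly as text.
cheapest = min(options, key=lambda o: o["price_usd"])   # "get me the cheapest"
earliest = min(options, key=lambda o: o["datetime"])    # "get me the earliest collect"

print(cheapest["id"], earliest["id"])  # opt-c opt-b
```

Anything involving more than one parameter (price vs. coverage vs. resolution) is exactly the tradeoff the user, not the provider, is best placed to make.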
Push Complexity to the Provider
On the other hand, some types of tasking may require complex parameters that are mostly meaningless to the user but important for successful data collection. A user who wants a stereo pair shouldn’t need to provide the relative displacement between images. That burden should be placed back on the provider. SAR image pairs for InSAR and large areas that require mosaicking are examples that may require multiple satellites or multiple tasks of a single satellite.
This led to the idea of data “products” (admittedly a poor, overloaded term) that represent different types of data collection. A common product would be a single optical or thermal scene, while more complex products could represent stereo or InSAR data collection. Products could define their own set of queryables, e.g., cloud cover or view angle. In the STAC API Specification, conformance classes are used for an API to advertise its capabilities. Products could be defined as common community extensions, while vendors could publish their own custom products.
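One way this might look in practice: a product bundles a description with the queryables a user may constrain, loosely following how STAC advertises capabilities. Everything below is a hypothetical sketch, not part of STAC or any agreed tasking specification:

```python
# Hypothetical "product" definitions -- names and queryables are illustrative,
# not part of STAC or any agreed tasking specification.
products = {
    "simple-scene": {
        "description": "Single optical or thermal scene",
        # STAC-style field names, per the naming-convention conclusion
        "queryables": ["eo:cloud_cover", "view:off_nadir"],
    },
    "stereo-pair": {
        "description": "Two images collected for stereo reconstruction",
        # The provider handles stereo geometry internally; the user never
        # supplies the relative displacement between images.
        "queryables": ["view:off_nadir"],
    },
}

def supported_queryables(product_name: str) -> list:
    """Return the queryables a user may constrain for a given product."""
    return products[product_name]["queryables"]

print(supported_queryables("stereo-pair"))
```

The appeal of this shape is that community extensions and vendor-specific products share one discovery mechanism: clients ask what products exist and which parameters each accepts.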
We’ll keep the momentum going with monthly virtual meetings through the end of the year, and another in-person event in the new year. Please reach out to us if you would like to get involved. The sat-tasking-sprint GitHub repository is the main hub with sprint notes and existing implementations. Create new issues for discussion, make a Pull Request, or use the Gitter channel.