Model Submission

Submission Fields

Model Name

A unique name for your model; it must not exceed 40 characters in length.

Model Description

A model description provides a brief overview and relevant details about your model. It aims to give users a clear understanding of what the model does and how it can be used effectively.

Visibility

Specify whether you want your models to have Private or Public visibility.

Private

Selecting this option means that your models will be visible and accessible only to you, the owner/developer who submitted them. They will not be visible on the marketplace. This is the default visibility for all models submitted on the portal. It is recommended not to switch out of Private until the model has been exhaustively tested.

Public

Opting for this option makes your models visible on the marketplace, where users will be able to run them to obtain insights on the User Portal. This feature is not available currently; developers will be notified when the Marketplace is live. Making your model Public does not expose your repository, models, or scripts to anyone, including users on the Marketplace or other developers.

Model Repository

The GitHub/Bitbucket URL of the repository containing the model, as specified in the guidelines. Please note that only these two platforms are supported.

Provide the repository URL where the model and its associated files are stored. A sample repository following the model submission guidelines can be found in the mission user handbook.

Please check the model submission guidelines below to make sure that your repository structure and code adhere to the specification provided.

Model Path

For Deep Learning (DL) models, please specify the exact location within the model repository of the trained model (.onnx file). For Non-DL models, which have no pre-trained model file, this field can point to the /Models folder.

Auth Key

An authentication key (Personal Access Token) is a unique alphanumeric token that serves as a secure credential for SkyServe to access the repository.

Model Config

If the model parameters are configured by a user (through the User Portal) or by the developer during testing, Model Configs can be used to define them. Any command-line arguments (argparse values) that can be passed to the model inference script fall under this category. These arguments allow customization of the model's behavior by adjusting specific parameters. Each argument can be described using the following fields.

- Argument-name: the name of the configurable parameter.
- Datatype: the data type of the argument (int or float).
- Min: the minimum allowed value for the argument.
- Max: the maximum allowed value for the argument.
- Default Value: the value assigned to the argument if the user provides no value.

Example: suppose the script accepts a '--threshold' argument, a configurable integer value that sets a threshold for the model. Its minimum value, maximum value, and default value can be provided using the respective fields described above.

The main inference script can then be invoked using a command such as 'python main.py --threshold=8', where threshold is a configurable parameter of datatype int.
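The argparse pattern above can be sketched as follows. This is a minimal illustration, not the required script layout; the argument name and default value mirror the example, while the min/max bounds are assumed to be enforced by the platform rather than by argparse itself.

```python
import argparse

def build_parser():
    # Hypothetical inference-script parser: '--threshold' mirrors the
    # Model Config example above (datatype int, with a default used
    # when the user provides no value).
    parser = argparse.ArgumentParser(description="Model inference")
    parser.add_argument("--threshold", type=int, default=5,
                        help="configurable threshold for the model")
    return parser

# Equivalent to invoking: python main.py --threshold=8
args = build_parser().parse_args(["--threshold=8"])
print(args.threshold)
```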

Model Input

The “Input Bands” section requires specifying the band order of images accepted by the model. This information is crucial for the custom script that will feed an image to the model during inference. If the model accepts PNG/JPEG image formats, it can only accommodate RGB bands, in that order. Ensure that the number of bands and the band order selected are the same as those accepted by the model. The inference script must not have any pre-processing steps that change the band order. In short, an image as configured in this section should be accepted by the model without any further modifications.
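The rule above can be sketched as a simple check. The band names here are illustrative assumptions, not values from the portal:

```python
# The band order declared under "Input Bands" must match exactly what
# the model consumes; the inference script must not reorder bands.
declared_bands = ["red", "green", "blue"]   # as selected on the portal
model_bands = ["red", "green", "blue"]      # as expected by the model

def band_order_matches(declared, expected):
    # Both the number of bands and their order must agree.
    return declared == expected

print(band_order_matches(declared_bands, model_bands))
```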

Input Product Type

SkyServe can deliver different types of image products, each with a different level of processing applied. Please specify the desired level of processing applicable to your use case and the type of input required by the model to work. The image product types and level of processing applied shall differ from mission to mission. Kindly refer to the user handbook shared for your mission to understand the product types available.

Data Masks

Each product may contain (a) clouds, (b) cloud shadows, or (c) faulty pixels that are left uncorrected. Data masks can be requested as input along with the image, to be used either for pre-processing the image before inference or for post-processing the result after inference. It is important to note that each requested data mask will be delivered as a separate TIFF file. Therefore, please ensure that your model can handle the total number of bands specified as well as the data masks requested.

The following data masks can be used to exclude specific pixels from processing.

- Cloud Mask: denotes presence (1) or absence (0) of cloud at a given pixel.
- Cloud Shadow Mask: denotes presence (1) or absence (0) of a cloud’s shadow at a given pixel.
- No Data Mask: denotes a faulty pixel that has not been corrected through interpolation. It is recommended to exclude such pixels (value = 1) from the final output, i.e. use only pixels with value = 0 in this mask.

If your model accepts a set of bands along with a cloud mask, then please specify those bands as "Input Bands" and <cloud_mask> as "Data Mask".
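The pre-processing use of a cloud mask can be sketched as follows. All values are illustrative, and the NODATA fill value is a hypothetical choice; real products are delivered as TIFF files, which this sketch does not read:

```python
# Illustrative 2x3 band and matching cloud mask.
band = [
    [0.42, 0.51, 0.38],
    [0.47, 0.55, 0.40],
]
cloud_mask = [
    [0, 1, 0],   # 1 = cloud present at that pixel
    [0, 0, 1],
]

NODATA = -1.0  # hypothetical fill value for excluded pixels

# Replace cloudy pixels so the model ignores them during inference.
masked = [
    [NODATA if m == 1 else v for v, m in zip(vals, marks)]
    for vals, marks in zip(band, cloud_mask)
]
print(masked)
```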

Model Output

Image Type

- Image type: select Integer or Float and define the values accordingly.
- Integer: an image file with multiple bands, where each pixel value is represented as an integer.
- Float: an image file with multiple bands, where each pixel value is represented as a floating-point number. Each band represents a specific attribute, such as NDMI (Normalized Difference Moisture Index) or NDVI (Normalized Difference Vegetation Index).
- Values: the range of pixel values in the image output (Min and Max).
- Threshold Range: the range for a threshold parameter, including the minimum value, maximum value, and precision (default precision is medium).
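A minimal sketch of how such a threshold might be applied to a float output band. The pixel values and threshold are illustrative, and the threshold is assumed to lie within the declared Min/Max range:

```python
# Illustrative float output band (e.g. an NDVI-like index).
pixels = [0.12, 0.45, 0.78, 0.30]
threshold = 0.4  # assumed to lie within the declared range

# Pixels at or above the threshold are highlighted (1), the rest are 0.
highlighted = [1 if v >= threshold else 0 for v in pixels]
print(highlighted)
```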

Segmentation

- Segmentation Type: specify the type of segmentation output.
- Binary Mask: a binary mask represented as a TIFF file. The mask consists of pixels with values of either 0 or 1, where 0 represents the background and 1 represents the segmented object.
- Multi-class: a multi-class segmentation mask represented as a TIFF file. The mask contains multiple values, where each value represents a specific class or category; the number of distinct values in the mask corresponds to the number of classes.
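The relationship between mask values and classes can be sketched as follows. The mask values are illustrative; real masks are delivered as TIFF files:

```python
# Illustrative multi-class mask with three classes (0, 1, 2).
mask = [
    [0, 0, 1],
    [2, 1, 0],
    [2, 2, 1],
]

# The distinct values present correspond to the model's classes.
classes = sorted({v for row in mask for v in row})
print(classes, len(classes))
```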

Object Detection

Parameters
Description

- Point: the object detection results in the form of a JSON object. Each object in the “prediction” array includes the prediction score and the x and y coordinates of the detected point.
- Bounding Box: the object detection results in the form of a JSON object. Each object in the “prediction” array includes the prediction score and the coordinates of the bounding box (xmin, ymin, xmax, ymax) that tightly encloses the detected object.
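A hypothetical example of the bounding-box output described above. The field names are assumptions for illustration and should match whatever schema your inference script actually emits:

```python
import json

# Two detections with scores and bounding-box coordinates.
result = {
    "prediction": [
        {"score": 0.91, "xmin": 34, "ymin": 12, "xmax": 58, "ymax": 40},
        {"score": 0.76, "xmin": 102, "ymin": 88, "xmax": 130, "ymax": 121},
    ]
}

# Serialize and re-parse to confirm the structure is valid JSON.
parsed = json.loads(json.dumps(result))
print(len(parsed["prediction"]))
```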

Object Classification

Parameters
Description

- Point: the object classification results in the form of a JSON object. Each object in the "prediction" array includes the class ID, label, prediction score, and the x and y coordinates of the classified point.
- Bounding Box: the object classification results in the form of a JSON object. Each object in the "prediction" array includes the class ID, label, prediction score, and the coordinates of the bounding box (xmin, ymin, xmax, ymax) that tightly encloses the classified object.
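A hypothetical example of the point classification output described above. The field names and label are assumptions, not a confirmed schema:

```python
import json

# One classified point with class ID, label, score, and coordinates.
result = {
    "prediction": [
        {"class_id": 2, "label": "ship", "score": 0.88, "x": 140, "y": 73},
    ]
}

parsed = json.loads(json.dumps(result))
print(parsed["prediction"][0]["label"])
```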


To generate an Auth Key for a GitHub repository, please refer to GitHub's documentation on Personal Access Tokens.

To generate an Auth Key for a Bitbucket repository, please refer to Bitbucket's documentation on App Passwords.

Please choose the appropriate model output type based on the model's inference from the categories above. For detailed information on each of these output types, along with sample examples, please refer to the Appendix.