Navigation | On-Premise | 2GIS Documentation

On-Premise Navigation services

Navigation services allow building routes and getting information about travel time and distance between points on the map, with or without consideration of current traffic conditions.

This article describes how to deploy and configure the Navigation services in your environment. To learn how to use the RESTful API provided by the Navigation services, see the documentation for the individual APIs (found in the top menu under "Navigation").

On-Premise Navigation services architecture

Navigation services comprise the following services:

  • Navi-Castle - imports data from S3 storage and serves it to Navi-Back in a consumable format.
  • Navi-Front - receives requests from applications and forwards them to Navi-Router and Navi-Back.
  • Navi-Router - verifies the request using the API Keys service and determines the appropriate Navi-Back service to process it, using the regions and rules system (see below).
  • Navi-Back - processes the request.

Navi-Back uses the Traffic Proxy service to get real-time traffic data from 2GIS Traffic Servers and build routes that take current traffic conditions into account.

Navigation services employ a scalable architecture that allows easy distribution of incoming requests among several Navi-Back instances:

  1. Navi-Front automatically discovers deployed Navi-Router and Navi-Back instances by checking services' labels in the Kubernetes namespace where Navi-Front resides.

  2. There can be several Navi-Back instances, each serving a dedicated share of the requests. Consequently, each instance fetches only the data sets it requires from the Navi-Castle service.

    Each Navi-Back instance serves one or several map regions, as defined by a "rule". This behavior is configured via rules files, which makes it possible to distribute the workload and plan computational resources accordingly. For example, a small Navi-Back instance can process a moderate amount of requests for a certain small region, while a more performant instance can process a large amount of requests for a bigger region.

  3. When Navi-Front receives an incoming request:

    1. It forwards the request to the Navi-Router service.

    2. Navi-Router uses the same rules files as Navi-Back and collects all the required data from Navi-Castle. Using the rules file and the collected data, Navi-Router finds a rule under which the request falls; in other words, it determines whether there is a Navi-Back instance that can process the request.

      If the request is successfully validated in the API Keys service and a suitable rule exists, then Navi-Router sends the name of the rule to Navi-Front.

    3. Navi-Front finds a suitable Navi-Back instance that is configured to work according to the received rule, and forwards the request to this instance.

    4. The Navi-Back instance processes the request and returns a response to Navi-Front.

    5. Navi-Front sends the response back to the application.
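The dispatch flow above can be sketched in Python. This is purely illustrative: all class and function names here are hypothetical, and the real services communicate over HTTP inside the Kubernetes cluster.

```python
# Illustrative sketch of the dispatch flow described above.
# All names here are hypothetical stand-ins for the real services.

class StubRouter:
    """Stands in for Navi-Router: maps a request's region to a rule name."""

    def __init__(self, rules):
        self.rules = rules  # {region: rule_name}, from the shared rules file

    def match_rule(self, request):
        # Navi-Router: validate the API key, then find a matching rule.
        return self.rules.get(request.get("region"))


def route_request(request, router, backends):
    """Mimic Navi-Front: ask the router for a rule, then pick a backend."""
    rule = router.match_rule(request)
    if rule is None:
        return {"type": "error", "message": "no suitable rule"}
    backend = backends.get(rule)  # the Navi-Back instance serving this rule
    if backend is None:
        return {"type": "error", "message": "no backend for rule " + rule}
    return backend(request)  # Navi-Back: process the request and respond


router = StubRouter({"dammam": "dammam_cr"})
backends = {"dammam_cr": lambda req: {"type": "result", "rule": "dammam_cr"}}
response = route_request({"region": "dammam"}, router, backends)
```

A request for a region with no matching rule short-circuits with an error, which mirrors how Navi-Router rejects requests no Navi-Back can serve.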

Navigation services can be deployed in two different configurations:

  1. Using all four services. This is the recommended deployment method that ensures security, scalability, and reliability.

  2. Using only Navi-Castle and Navi-Back. In this case, all requests are processed directly by Navi-Back, and the request verification and routing steps are skipped. We recommend this configuration for testing purposes only.

    Note:

    Without Navi-Router, Navi-Back can process only the requests that fall under its single configured rule. In a distributed deployment, the Navi-Front and Navi-Router services are required for On-Premise Navigation services to operate.

Detailed requirements for each service are listed in the Overview document. Additional information can be found in the Deployment considerations section of this document.

Shared infrastructure:

  • Support for Kubernetes Persistent Volume and dynamic Persistent Volume Claim for storing data (optional requirement).

    Important note:

    It is highly recommended to configure Persistent Volume and Persistent Volume Claim storage features in your Kubernetes cluster.

    If no persistent volume is provided to Navi-Castle, the data will be stored on an emptyDir volume and will be lost if the Navi-Castle pod is removed from its Kubernetes cluster node.

Services required by each component:

  • Navi-Back:
    • Traffic Proxy service configured to use Traffic Update servers that provide data in a format suitable for the navigation services.
    • Navi-Castle
  • Navi-Router:
    • Navi-Castle
  • Navi-Front:
    • Navi-Castle
    • Navi-Router
    • Navi-Back

Rules files

Navi-Back uses a rules file to specify the types of requests it can serve. This allows a Navi-Back instance to fetch and store only the limited set of data from Navi-Castle that is sufficient to serve those requests.

The rules file is also used by the Navi-Router service to determine which of the several Navi-Back instances can process a request.

The rules file has the following structure:

[
  {
    "name": "<rule_name>",
    "router_projects": [
        "<name of the project on a Navi-Router>"
    ],
    "moses_projects": [
        "<name of the project on a Navi-Back>"
    ],
    "projects": [
        "<region name>"
    ],
    "queries": [
        <array of request types that are allowed to be processed>
    ],
    "routing": [
        <available routing types for Routing requests>
    ]
  }
]
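For illustration, a hypothetical rules file for a single region might look like this. The rule name matches the `dammam_cr` example used in the testing section below; the query and routing type values are assumptions, not taken from a real deployment:

```json
[
  {
    "name": "dammam_cr",
    "router_projects": ["dammam"],
    "moses_projects": ["dammam"],
    "projects": ["dammam"],
    "queries": ["carrouting", "distance-matrix"],
    "routing": ["car"]
  }
]
```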

Deploying the services

To deploy the Navigation services, do the following:

  1. Do the common deployment steps.

    Note:

    Do not forget to write down the path to the manifest file: it will be required to deploy the services.

  2. Deploy Traffic Proxy service, if it is not deployed yet. See the Requirements section for details.

  3. Deploy Navi-Castle service.

  4. Deploy Navi-Back service.

  5. Deploy Navi-Router service.

  6. Deploy Navi-Front service.

Deploying the Navi-Castle service

  1. Create the values-castle.yaml configuration file:

    values-castle.yaml

    dgctlDockerRegistry: <Docker Registry hostname and port>/2gis-on-premise
    
    dgctlStorage:
        host: <Deployment Artifacts Storage endpoint>
        bucket: <Deployment Artifacts Storage bucket>
        accessKey: <The bucket access key>
        secretKey: <The bucket secret key>
        manifest: <Path to the manifest file>
    
    resources:
        limits:
            cpu: 1000m
            memory: 512Mi
        requests:
            cpu: 500m
            memory: 128Mi
    
    persistentVolume:
        enabled: false
        accessModes: <volume access mode>
        storageClass: <volume storage class>
        size: <volume size>
    
    castle:
        castle_data_path: '/opt/castle/data/'
    
    cron:
        enabled: <true or false>
        schedule: <schedule string>
        concurrencyPolicy: <concurrency policy>
        successfulJobsHistoryLimit: <history depth>
    
    replicaCount: <number of the Castle service replicas>
    

    Where:

    1. dgctlDockerRegistry: your Docker Registry endpoint where On-Premise services' images reside.

    2. dgctlStorage: Deployment Artifacts Storage settings.

      1. Fill in the common settings to access the storage: endpoint, bucket, and access credentials.
      2. manifest: fill in the path to the manifest file in the manifests/1640661259.json format. This file contains the description of pieces of data that the service requires to operate.
    3. resources: computational resources settings for the service. See the minimal requirements table for recommended values.

    4. persistentVolume: settings of Kubernetes Persistent Volume Claim (PVC) that is used to store the service data.

      1. enabled: flag that controls whether PVC is enabled (default: false). If PVC is disabled, a service's replica can lose its data.
      2. accessModes: access mode for the PVC (default: none). Available modes are the same as for persistent volumes.
      3. storageClass: storage class for the PVC.
      4. size: storage size.

      Important note:

      Navi-Castle is deployed using StatefulSet. This means that every Navi-Castle replica will get its own dedicated Persistent Storage with the specified settings.

      For example, if you configure the size setting as 5Gi, then the total storage volume required for 3 replicas will be equal to 15Gi.

    5. castle.castle_data_path: path to the Navi-Castle data directory.

    6. cron: the Kubernetes Importer cron job settings. These settings are the same for all deployed Navi-Castle replicas. This job fetches up-to-date data from the Deployment Artifacts Storage and updates the data on the Navi-Castle replica.

      1. enabled: flag that controls whether the job is enabled (default: false). If the job is disabled, no Navi-Castle replicas will get data updates.
      2. schedule: schedule of the job in cron format. For example, */10 * * * *.
      3. concurrencyPolicy: the job concurrency policy.
      4. successfulJobsHistoryLimit: a limit on how many completed jobs should be kept.
    7. replicaCount: number of the Navi-Castle service replicas. Note that each replica's pod gets its own dedicated cron job to fetch up-to-date data from the Deployment Artifacts Storage.
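As a sanity check of the storage arithmetic in the note above (every StatefulSet replica gets its own dedicated volume), a small hypothetical helper:

```python
# Storage arithmetic for a StatefulSet: every Navi-Castle replica gets
# its own dedicated Persistent Volume of the configured size.

def total_storage_gi(size_gi, replica_count):
    """Total storage (in Gi) required across all replicas."""
    return size_gi * replica_count

# The example from the note above: size 5Gi with 3 replicas.
total = total_storage_gi(5, 3)
```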

  2. Deploy the service with Helm using the created values-castle.yaml configuration file.

    helm upgrade --install --version=1.0.3 --atomic --values ./values-castle.yaml navi-castle 2gis-on-premise/navi-castle
    

    On its first start, a Navi-Castle replica will fetch the data from Deployment Artifacts Storage. After that, the data will be updated on schedule by the Cron Job.

Deploying the Navi-Back service

  1. Create the rules.conf file with the required set of rules.

  2. Create the values-back.yaml configuration file:

    values-back.yaml

    dgctlDockerRegistry: <Docker Registry hostname and port>/2gis-on-premise
    
    affinity: <affinity rules>
    
    autoscaling:
        enabled: <true or false>
        maxReplicas: <max replicas number>
        minReplicas: <min replicas number>
        scaleDownWindowsSeconds: <scale-down window>
        scaleUpWindowSeconds: <scale-up window>
        targetCPUUtilizationPercentage: <target CPU utilization>
    
    naviback:
        app_castle_host: <URL of Navi-Castle service>
        eca_host: <Domain name of the Traffic Proxy service>
        forecast_host: <URL of Traffic forecast service>
        rules_filename: <rules file name>
        app_rule: <rule name from the rules file to apply>
        type: <routing type: taxi or carrouting>
    
    replicaCount: <number of the Navi-Back service replicas>
    
    resources:
        limits:
            cpu: 2000m
            memory: 16000Mi
        requests:
            cpu: 1000m
            memory: 1024Mi
    

    Where:

    1. dgctlDockerRegistry: your Docker Registry endpoint where On-Premise services' images reside.

    2. affinity: node affinity settings.

    3. autoscaling: autoscaling settings.

    4. naviback: the Navi-Back service settings.

      1. app_castle_host: URL of Navi-Castle service. This URL should be accessible from all the pods within your Kubernetes cluster.
      2. eca_host: domain name of the Traffic Proxy service. This URL should be accessible from all the pods within your Kubernetes cluster.
      3. forecast_host: URL of Traffic forecast service. This URL should be accessible from all the pods within your Kubernetes cluster.
      4. rules_filename: rules file name. Use the name of the file you have created in the previous step.
      5. app_rule: rule name from the rules file to apply.
      6. type: the routing type this Navi-Back deployment will handle: taxi or carrouting.
    5. replicaCount: number of the Navi-Back service replicas.

    6. resources: computational resources settings for the service. See the minimal requirements table for recommended values.
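For reference, the naviback section with the placeholders filled in might look like the sketch below. All hostnames and the rule name are illustrative examples, not values from a real deployment:

```yaml
naviback:
    app_castle_host: http://navi-castle.host
    eca_host: traffic-proxy.host
    forecast_host: http://traffic-forecast.host
    rules_filename: rules.conf
    app_rule: dammam_cr
    type: carrouting
```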

  3. Deploy the service with Helm using the created values-back.yaml configuration file.

    helm upgrade --install --version=1.0.3 --atomic --values ./values-back.yaml navi-back 2gis-on-premise/navi-back
    
Deploying the Navi-Router service

  1. Create the rules.conf file with the required set of rules.

  2. Create the values-router.yaml configuration file:

    values-router.yaml

    dgctlDockerRegistry: <Docker Registry hostname and port>/2gis-on-premise
    
    router:
        app_castle_host: http://navi-castle.host
        additional_sections: |-
            "key_management_service" :
            {
              "service_remote_address" : "http://keys-api.host",
              "service_apis" :
              [
                  {"type" : "directions", "token" : "DIRECTIONS_API_KEY"},
                  {"type" : "distance-matrix", "token" : "DISTANCE_MATRIX_API_KEY"},
                  {"type" : "pairs-directions", "token" : "PAIRS_DIRECTIONS_API_KEY"},
                  {"type" : "truck-directions", "token" : "TRUCK_DIRECTIONS_API_KEY"},
                  {"type" : "public-transport", "token" : "PUBLIC_TRANSPORT_API_KEY"},
                  {"type" : "isochrone", "token" : "ISOCHRONE_API_KEY"},
                  {"type" : "map-matching", "token" : "MAP_MATCHING_API_KEY"}
              ]
            }
    replicaCount: 2
    resources:
        limits:
            cpu: '2000m'
            memory: '1024Mi'
        requests:
            cpu: '500m'
            memory: '128Mi'
    

    Where:

    1. dgctlDockerRegistry: your Docker Registry endpoint where On-Premise services' images reside.

    2. app_castle_host: URL of the Navi-Castle service. This URL should be accessible from all the pods within your Kubernetes cluster.

    3. key_management_service: API Keys service settings. If this parameter is omitted, the API key verification step will be skipped.

      1. service_remote_address: URL of the API Keys service. This URL should be accessible from all the pods within your Kubernetes cluster.
      2. service_apis: keys for individual APIs that were set in API Keys Admin.
    4. replicaCount: number of service replicas.

    5. resources: computational resources settings for the service. See the minimal requirements table for recommended values.

  3. Deploy the service with Helm using the created values-router.yaml configuration file.

    helm upgrade --install --version=1.0.3 --atomic --values ./values-router.yaml navi-router 2gis-on-premise/navi-router
    
Deploying the Navi-Front service

  1. Create the values-front.yaml configuration file:

    values-front.yaml

    dgctlDockerRegistry: <Docker Registry hostname and port>/2gis-on-premise
    
    affinity: <affinity rules>
    autoscaling:
        enabled: 'true'
        maxReplicas: 6
        minReplicas: 2
        scaleDownWindowsSeconds: 600
        scaleUpWindowSeconds: 300
        targetCPUUtilizationPercentage: 90
    replicaCount: 2
    resources:
        limits:
            cpu: 100m
            memory: 128Mi
        requests:
            cpu: 100m
            memory: 128Mi
    

    Where:

    1. dgctlDockerRegistry: your Docker Registry endpoint where On-Premise services' images reside.
    2. affinity: node affinity settings.
    3. autoscaling: autoscaling settings.
    4. replicaCount: number of service replicas.
    5. resources: computational resources settings for the service. See the minimal requirements table for recommended values.
  2. Deploy the service with Helm using the created values-front.yaml configuration file.

    helm upgrade --install --version=1.0.3 --atomic --values ./values-front.yaml navi-front 2gis-on-premise/navi-front
    

To update the Navi-Castle service, execute the following command:

helm upgrade --version=1.0.3 --atomic --values ./values-castle.yaml navi-castle 2gis-on-premise/navi-castle

To update the Navi-Back service, execute the following command:

helm upgrade --version=1.0.3 --atomic --values ./values-back.yaml navi-back 2gis-on-premise/navi-back

To update the Navi-Router service, execute the following command:

helm upgrade --version=1.0.3 --atomic --values ./values-router.yaml navi-router 2gis-on-premise/navi-router

To update the Navi-Front service, execute the following command:

helm upgrade --version=1.0.3 --atomic --values ./values-front.yaml navi-front 2gis-on-premise/navi-front

To test that the Navi-Castle service is working, you can do the following:

  1. Port forward the service using kubectl:

    kubectl port-forward navi-castle-0 7777:8080
    
  2. Send a GET request to the root endpoint using cURL or a similar tool:

    curl -Lv http://NAVI_CASTLE_HOST:7777/
    

    You should receive an HTML listing of all files and folders similar to the following:

    <html>
        <head>
            <title>Index of /</title>
        </head>
        <body>
            <h1>Index of /</h1>
            <hr />
            <pre>
                <a href="../">../</a>
                <a href="lost%2Bfound/">lost+found/</a>09-Mar-2022 13:33                   -
                <a href="packages/">packages/</a>09-Mar-2022 13:33                   -
                <a href="index.json">index.json</a>09-Mar-2022 13:33                 634
                <a href="index.json.zip">index.json.zip</a>09-Mar-2022 13:33                 357
            </pre>
            <hr />
        </body>
    </html>
    

To test that the Navi-Back service is working, you can do the following:

  1. Port forward the service using kubectl:

    kubectl port-forward navi-back-6864944c7-vrpns 7777:8080
    
  2. Create the following file containing the body of the request:

    data.json

    {
        "locale": "en",
        "points": [
            {
                "type": "walking",
                "x": 50.061144,
                "y": 26.409866
            },
            {
                "type": "walking",
                "x": 50.044684,
                "y": 26.377784
            }
        ],
        "type": "jam"
    }
    
  3. Send the request using cURL or a similar tool:

    curl -Lv http://NAVI_BACK_HOST:7777/carrouting/6.0.0/global -d @data.json
    

    You should receive a response with the following structure:

    {
      "query": {..},
      "result": [{..}, {..}],
      "type": "result"
    }
    

    See the Navigation documentation for request examples.
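The data.json body above can also be built programmatically. A minimal sketch, where the helper name is ours and `x`/`y` hold the longitude/latitude values from the example:

```python
import json

# Build the same carrouting request body as data.json above.
# The helper name is hypothetical; "x" and "y" carry the coordinates
# used in the example request.

def make_route_request(points, locale="en", mode="jam"):
    """Return a carrouting request body for the given (x, y) points."""
    return {
        "locale": locale,
        "points": [{"type": "walking", "x": x, "y": y} for x, y in points],
        "type": mode,
    }

body = make_route_request([(50.061144, 26.409866), (50.044684, 26.377784)])
payload = json.dumps(body)  # send this as the POST body, e.g. with curl -d
```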

To test that the Navi-Router service is working, you can do the following:

  1. Port forward the service using kubectl:

    kubectl port-forward navi-router-6864944c7-vrpns 7777:8080
    
  2. Create the following file containing the body of the request:

    data.json

    {
        "locale": "en",
        "points": [
            {
                "type": "walking",
                "x": 50.061144,
                "y": 26.409866
            },
            {
                "type": "walking",
                "x": 50.044684,
                "y": 26.377784
            }
        ],
        "type": "jam"
    }
    
  3. Send the request using cURL or a similar tool:

    curl -Lv http://NAVI_ROUTER_HOST:7777/carrouting/6.0.0/global -d @data.json
    

    You should receive a response containing the rule name:

    dammam_cr
    

To test that the Navi-Front service is working, you can do the following:

  1. Port forward the service using kubectl:

    kubectl port-forward navi-front-6864944c7-vrpns 7777:8080
    
  2. Create the following file containing the body of the request:

    data.json

    {
        "locale": "en",
        "points": [
            {
                "type": "walking",
                "x": 50.061144,
                "y": 26.409866
            },
            {
                "type": "walking",
                "x": 50.044684,
                "y": 26.377784
            }
        ],
        "type": "online5"
    }
    
  3. Send the request using cURL or a similar tool:

    curl -Lv http://NAVI_FRONT_HOST:7777/carrouting/6.0.0/global -d @data.json
    

    You should receive a response with the following structure:

    {
      "query": {..},
      "result": [{..}, {..}],
      "type": "result"
    }