Getting Started with DIAS-KUKSA

_images/dias.png
_images/kuksa.png

Contents

Introduction

DIAS (DIagnostic Anti-tampering Systems)

Modern vehicles with internal combustion engines are equipped with exhaust aftertreatment systems that drastically reduce the emission of harmful exhaust gases. However, there are companies that offer facilities and services to disable these exhaust aftertreatment systems. In a joint European research and development project, DIAS, we will help prevent or uncover these manipulations.

Eclipse KUKSA

_images/appstacle-kuksa.PNG
  • KUKSA is a code-based result of the internationally funded project, APPSTACLE (2017 - 2019).
_images/kuksa_ecosystem.png
  • An open-source eco-system for the connected vehicle domain.
  • It is introduced to establish a standard for car-to-cloud scenarios.
  • It improves comprehensive domain-related development activities.
  • It opens the market to external applications and service providers.
  • It facilitates the use of open-source software wherever possible without compromising security.
  • The initial release (0.1.0): 30.09.2019 / The second release (0.2.0): 01.2020
  • Implementing the DIAS use-case with KUKSA benefits both parties by enabling the solution to be compliant with any vehicles regardless of OEM-specific standards.

DIAS-KUKSA

One objective of DIAS is to create a cloud-based diagnostic system. Managing a large number of vehicles requires sufficient computing power and resources. A cloud-based system would not only provide these but also make the entire system easy to scale with the number of target vehicles by utilizing cloud service providers such as Azure, AWS and Bosch IoT Hub. For the system to be powered by service providers like these, it is essential to establish connectivity between the vehicle-server-based applications and the external-server-based applications. The KUKSA infrastructure offers the means for instituting such connectivity.

The goal of this documentation is to make clear how to set up each infrastructure component according to the case of DIAS in a sequential manner so that readers can have a thorough understanding of how to apply their own implementation on the established connectivity with KUKSA.

DIAS-KUKSA Overall Schema
_images/overall_schema.png

The figure illustrates the entire connectivity cycle from the vehicle to the end-consumer. In the following chapters, how to establish such connectivity is described in detail.

Step 1: Hardware Setup

Raspberry-Pi (Data Publisher)

For development, you will be using a Raspberry-Pi 3 or 4 (preferably 4, since it is faster and has more RAM). The Raspberry-Pi is not a regular micro-controller but a single-board computer. This means that you can run an OS (Operating System; Raspbian, Ubuntu, etc.) on it and connect it to other IO devices such as a monitor, mouse and keyboard. This way, you can use your Raspberry-Pi in much the same way you use your PC, which eases the entire in-vehicle development process.

  • (Hardware Option 1 - Raspberry-Pi) In this documentation, the following hardware and OS are used.
    • HW: Raspberry-Pi 4
    • OS: Raspberry-Pi OS (32-bit) with desktop / Download
  1. Set up Raspberry-Pi. You can kick-start with your Raspberry-Pi by following this instruction.

  2. When installation is done, open a terminal and install Git on your Raspberry-Pi:

    $ sudo apt update
    $ sudo apt install git
    $ git --version
    
  • (Hardware Option 2 - Ubuntu VM) If you are only interested in desk development without connecting to the real CAN, you can use a virtual machine as your in-vehicle “hardware”.
    • Set up an Ubuntu virtual machine. A detailed tutorial on how to set up Ubuntu with VirtualBox is available here.
    • The image file (Ubuntu 18.04 LTS - Bionic Beaver) that is compatible with this documentation can be downloaded here.
CAN Interface for Hardware

For your hardware to interact with CAN, a CAN interface is required. Since the Raspberry-Pi doesn’t have a built-in CAN interface, the user has to configure one manually. There are several ways to configure the interface on a Raspberry-Pi; three options with different purposes are introduced here.

CAN Interface Option 1 - Virtual CAN (Logfile Simulation Purpose)
  • A virtual CAN interface emulates a physical CAN interface and behaves nearly identically, with fewer limitations. A virtual CAN interface is appropriate when the user just wants to replay a CAN log to test applications during the development phase.
  1. Open a terminal and command:

    $ sudo modprobe vcan
    $ sudo ip link add dev vcan0 type vcan
    $ sudo ip link set up vcan0
    
  2. Install net-tools to use ifconfig:

    $ sudo apt install net-tools
    
  3. Now you should be able to see the interface, vcan0, when commanding:

    $ ifconfig
    
CAN Interface Option 2 - SKPang PiCan2 (Only for Raspberry-Pi)
_images/pican2.jpg
  • SKPang PiCan2 is a shield that provides a physical CAN interface between Raspberry-Pi and the actual CAN bus. A physical CAN interface is required for a field test.
  1. For your Raspberry-Pi to recognize the connected PiCan2, you need to go through a setup process. After physically connecting a PiCan2 shield to your Raspberry-Pi, follow the “Software Installation (p.6)” part of the instruction from Raspberry-Pi.

  2. When installation is done, open a terminal and confirm whether the can0 interface is present by commanding:

    $ ifconfig -a
    
  3. If can0 is shown, configure and bring the interface up by commanding:

    $ sudo ip link set can0 up type can bitrate 500000
    
  • The bitrate must be set to the same value as the CAN baud rate of the target vehicle.
  4. Now you should be able to see the interface, can0, when commanding:

    $ ifconfig
    
  5. If you want to bring the interface down, command the following:

    $ sudo ip link set can0 down
    
CAN Interface Option 3 - Seeed 2-Channel Shield (Only for Raspberry-Pi)
_images/seed_2_channel.png
  • Seeed 2-Channel CAN-BUS(FD) Shield serves the same purpose as SKPang PiCan2 does but with two different CAN interfaces. Because a lot of vehicles use more than one CAN channel, it is required to use a dual-channel shield when data from two different CAN channels need to be analyzed in real-time.
  • A detailed setup description can be found here.
  1. Get the CAN-HAT source code and install all linux kernel drivers:

    $ git clone https://github.com/seeed-Studio/pi-hats
    $ cd pi-hats/CAN-HAT
    $ sudo ./install.sh
    $ sudo reboot
    
  2. After the reboot, confirm if can0 and can1 interfaces are successfully initialized by commanding:

    $ dmesg | grep spi
    
  3. You should be able to see output like the following:

    [ 3.725586] mcp25xxfd spi0.0 can0: MCP2517 successfully initialized.
    [ 3.757376] mcp25xxfd spi1.0 can1: MCP2517 successfully initialized.
    
  4. Open a terminal and double-check whether the can0 and can1 interfaces are present by commanding:

    $ ifconfig -a
    

5-A. (CAN Classic) If can0 and can1 are shown, configure and bring the interfaces up by commanding:

$ sudo ip link set can0 up type can bitrate 1000000 restart-ms 1000 fd off
$ sudo ip link set can1 up type can bitrate 1000000 restart-ms 1000 fd off
  • The bitrate must be set to the same value as the CAN baud rate of the target vehicle.

5-B. (CAN FD) If can0 and can1 are shown, configure and bring the interface up by commanding:

$ sudo ip link set can0 up type can bitrate 1000000 dbitrate 2000000 restart-ms 1000 fd on
$ sudo ip link set can1 up type can bitrate 1000000 dbitrate 2000000 restart-ms 1000 fd on
  • The bitrate must be set to the same value as the CAN baud rate of the target vehicle.
  6. If you want to bring the interfaces down, command the following:

    $ sudo ip link set can0 down
    $ sudo ip link set can1 down
    

Linux Machine (Data Consumer)

  • A data consumer machine is intended to use the data produced by the connected vehicle’s Raspberry-Pi. For development, you can use a virtual machine on your PC, which can later be replaced with a VM instance from a cloud service provider to ensure scalability. Please note that a virtual machine is not required if your default OS is already Ubuntu.
  1. Set up an Ubuntu virtual machine. A detailed tutorial on how to set up Ubuntu with VirtualBox is available here.

    • The image file used (Ubuntu 18.04 LTS - Bionic Beaver) for this documentation can be downloaded here.
  2. Open a terminal and install Git on Ubuntu:

    $ sudo apt update
    $ sudo apt install git
    $ git --version
    

Step 2: In-vehicle Setup

_images/invehicle_schema.png

can-utils

  • can-utils is a Linux-specific set of utilities that enables Linux to communicate with the vehicle’s CAN network. The basic tutorial can be found here.
  1. Open a terminal and install can-utils:

    $ sudo apt install can-utils
    
  2. To test can-utils, command the following in the same terminal:

    $ candump vcan0
    
  • candump allows you to print all data that is being received by a CAN interface, vcan0, on the terminal.
  3. Open another terminal and command the following:

    $ cansend vcan0 7DF#DEADBEEF
    
  • cansend sends a CAN message, 7DF#DEADBEEF to the corresponding CAN interface, vcan0.
  4. Confirm whether candump has received the CAN message. You should be able to see output like the following on the previous terminal:

    vcan0   7DF    [4]   DE AD BE EF
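
The same frame can also be sent programmatically with the python-can package (which dbcfeeder.py uses and which is installed in a later step). A minimal sketch, assuming the vcan0 interface from above is up:

    import can

    # Open the virtual CAN interface configured earlier.
    bus = can.interface.Bus(channel="vcan0", bustype="socketcan")
    # Same payload as the cansend example: ID 0x7DF, data DE AD BE EF.
    msg = can.Message(arbitration_id=0x7DF,
                      data=[0xDE, 0xAD, 0xBE, 0xEF],
                      is_extended_id=False)
    bus.send(msg)  # candump vcan0 should now show: vcan0  7DF  [4]  DE AD BE EF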
    

kuksa.val Infrastructure

  1. Install Git:

    $ sudo apt install git
    
  2. Recursively clone the kuksa.val repository:

    $ git clone --recursive https://github.com/eclipse/kuksa.val.git
    
  3. Make a folder named build inside the kuksa.val repository folder and navigate to kuksa.val/build/:

    $ cd kuksa.val
    $ mkdir build
    $ cd build
    
  4. The following commands should be run before cmake to avoid possible errors.

4-1. Install cmake (version 3.12 or higher) if it hasn’t been installed:

$ sudo apt-get update && sudo apt-get upgrade
  1. Raspberry-Pi:

    $ sudo apt install cmake
    
  2. Ubuntu:

    $ sudo snap install cmake --classic
    

4-2. Install dependencies (Boost libraries, OpenSSL, Mosquitto and more):

$ sudo apt-get install libblkid-dev e2fslibs-dev libboost-all-dev libaudit-dev libssl-dev mosquitto libmosquitto-dev libglib2.0-dev
  5. You can run cmake now. Navigate to kuksa.val/build/ and command the following:

    $ cmake ..
    
  6. Then command make in the same directory:

    $ make
    

If the build succeeds, you have successfully built the kuksa.val infrastructure.

kuksa.val - kuksa.val VSS Server Setup
_images/invehicle_schema_server.png
  1. The kuksa.val server is built on the GENIVI VSS (Vehicle Signal Specification) data structure model. The VSS data structure is created from the JSON file that is passed to the kuksa-val-server executable as an argument under --vss (e.g., vss_rel_2.0.json). Before we bring up and run the kuksa.val server, we can create our own VSS data structure in the following steps.

1-1. Recursively clone the GENIVI/vehicle_signal_specification repository:

$ git clone --recurse-submodules https://github.com/GENIVI/vehicle_signal_specification.git

1-2. The name of the cloned repository folder is vehicle_signal_specification. Inside there is a Makefile that creates the VSS data structure according to vehicle_signal_specification/spec. Since we only need a JSON file as output, we can modify the Makefile as follows:

#
# Makefile to generate specifications
#

.PHONY: clean all json

all: clean json

DESTDIR?=/usr/local
TOOLSDIR?=./vss-tools
DEPLOYDIR?=./docs-gen/static/releases/nightly


json:
    ${TOOLSDIR}/vspec2json.py -i:spec/VehicleSignalSpecification.id -I ./spec ./spec/VehicleSignalSpecification.vspec vss_rel_$$(cat VERSION).json

clean:
    rm -f vss_rel_$$(cat VERSION).json
    (cd ${TOOLSDIR}/vspec2c/; make clean)

install:
    git submodule init
    git submodule update
    (cd ${TOOLSDIR}/; python3 setup.py install --install-scripts=${DESTDIR}/bin)
    $(MAKE) DESTDIR=${DESTDIR} -C ${TOOLSDIR}/vspec2c install
    install -d ${DESTDIR}/share/vss
    (cd spec; cp -r * ${DESTDIR}/share/vss)

deploy:
    if [ -d $(DEPLOYDIR) ]; then \
        rm -f ${DEPLOYDIR}/vss_rel_*;\
    else \
        mkdir -p ${DEPLOYDIR}; \
    fi;
        cp  vss_rel_* ${DEPLOYDIR}/
  • Please note that it is recommended to modify the file manually, since Makefiles are tab-sensitive (recipe lines must start with a tab).

1-3. Now we can replace the vehicle_signal_specification/spec folder with the modified folder. To get the modified spec folder, clone the junh-ki/dias_kuksa repository:

$ git clone https://github.com/junh-ki/dias_kuksa.git

1-4. In the directory dias_kuksa/utils/in-vehicle/vss_structure_example/, the spec folder can be found. Replace the existing spec folder in vehicle_signal_specification/ with the one from dias_kuksa/utils/in-vehicle/vss_structure_example/. The spec folder’s file structure is largely self-explanatory. The following figure illustrates what the GENIVI data structure looks like when created with this spec folder.

_images/dias_GENIVI_structure_.png
  • By modifying the structure of the spec folder, a user-specific GENIVI data structure can be created and fed to kuksa-val-server.

1-5. Before commanding make, install the Python dependencies (anytree, deprecation, stringcase, pyyaml) first:

$ sudo apt install python3-pip
$ pip3 install anytree deprecation stringcase pyyaml

1-6. Navigate to the directory, vehicle_signal_specification/, and command make to create a new JSON file:

$ make

1-7. As a result, you get a JSON file named vss_rel_2.0.0-alpha+006.json. Rename this file to modified.json for convenience and move it to kuksa.val/build/src/, where the kuksa-val-server executable file is located.

  2. Now we can bring up and run the kuksa.val server with modified.json. Navigate to the directory kuksa.val/build/src/ and command the following:

    $ ./kuksa-val-server --vss modified.json --insecure --log-level ALL
    
  • The kuksa.val server is entirely passive, which means that you need supplementary applications to feed and fetch the data. dbcfeeder.py and cloudfeeder.py, introduced in the following sections, are meant to set and get data on the kuksa.val server (see the request sketch below).
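
For orientation, the sketch below shows the kind of JSON requests these clients exchange with the server. The request format mirrors the do_getValue function shown later in this chapter; the default port (8090), the authorize action and the example VSS path are assumptions that may differ in your build.

    import json

    # Hypothetical request payloads in the style used by testclient.py and cloudfeeder.py.
    authorize_req = {"requestId": 1, "action": "authorize",
                     "tokens": open("super-admin.json.token").read().strip()}
    set_req = {"requestId": 2, "action": "set",
               "path": "Vehicle.OBD.EngineSpeed", "value": 1273.5}
    get_req = {"requestId": 3, "action": "get", "path": "Vehicle.OBD.EngineSpeed"}

    # dbcfeeder.py issues "set" requests and cloudfeeder.py issues "get" requests;
    # both are serialized with json.dumps() and sent over the server's websocket
    # (ws://localhost:8090 when started with --insecure).
    print(json.dumps(set_req))
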
kuksa.val - dbcfeeder.py Setup
_images/invehicle_schema_dbcfeeder.png

kuksa.val/clients/feeder/dbc2val/dbcfeeder.py interprets the CAN data received on the CAN interface (e.g., can0 or vcan0) and writes it to the kuksa.val server.

  • dbcfeeder.py takes four compulsory arguments to be run:
    • CAN interface (e.g., can0 or vcan0) / -d or --device / To connect to the CAN device interface.
    • JSON token (e.g., super-admin.json.token) / -j or --jwt / To have write-access to the server.
    • DBC file (e.g., dbcfile.dbc) / --dbc / To translate the raw CAN data.
    • Mapping YML file (e.g., mapping.yml) / --mapping / To map each of the specific signals to the corresponding path in the kuksa.val server.
  • Since the kuksa.val work package already provides the admin JSON token, you only need a DBC file and a YML file. The junh-ki/dias_kuksa repository provides example DBC and YML files. (The DBC file is target-vehicle-specific and can be obtained from the target vehicle’s manufacturer.) The sketch below illustrates what the DBC file is used for.
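
The sketch uses cantools (one of dbcfeeder.py’s dependencies); the file name and the dummy payload are placeholders.

    import cantools

    # "dias_simple.dbc" is a placeholder; any DBC file can be loaded here.
    db = cantools.database.load_file("dias_simple.dbc")

    msg = db.messages[0]     # first message defined in the DBC file
    raw = bytes(msg.length)  # dummy all-zero payload of the right length
    # decode_message() turns raw bytes into named, scaled signal values, which
    # dbcfeeder.py then maps to VSS paths in the server via the --mapping YML file.
    print(msg.name, db.decode_message(msg.frame_id, raw))
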
  1. You should have already cloned the junh-ki/dias_kuksa repository. If you haven’t, please clone it now:

    $ git clone https://github.com/junh-ki/dias_kuksa.git
    
  2. Navigate to the directory dias_kuksa/utils/in-vehicle/dbcfeeder_example_arguments/, and copy dias_mapping.yml and dias_simple.dbc (omitted due to copyright issues and thus shared on request) to kuksa.val/clients/feeder/dbc2val/, where dbcfeeder.py is located.

  3. Before running dbcfeeder.py, install the Python dependencies (python-can, cantools, serial, websockets) first:

    $ pip3 install python-can cantools serial websockets
    
  4. If you haven’t brought up a virtual CAN interface, vcan0, please do it now by following CAN Interface Option 1 - Virtual CAN (Logfile Simulation Purpose).

  5. Navigate to kuksa.val/clients/feeder/dbc2val/, and command the following:

    $ python3 dbcfeeder.py -d vcan0 -j ../../../certificates/jwt/super-admin.json.token --dbc dias_simple.dbc --mapping dias_mapping.yml
    
  6. (Optional) If your DBC file follows the J1939 standard, please follow Running dbcfeeder.py with j1939reader.py to run dbcfeeder.py with J1939.

kuksa.val - cloudfeeder.py Setup
_images/invehicle_schema_cloudfeeder.png
  • dias_kuksa/utils/in-vehicle/cloudfeeder_telemetry/cloudfeeder.py fetches data from the kuksa.val in-vehicle server, preprocesses it with a user-specific preprocessor (dias_kuksa/utils/in-vehicle/cloudfeeder_telemetry/preprocessor_bosch.py), and transmits the result to Hono (kuksa.cloud - Eclipse Hono (Cloud Entry)) in the form of a JSON dictionary.
  • cloudfeeder.py takes seven compulsory arguments to be run:
    • JSON token (e.g., super-admin.json.token) / -j or --jwt / To have access to the server.
    • Host URL (e.g., “mqtt.bosch-iot-hub.com”) / --host
    • Protocol Port Number (e.g., “8883”) / -p or --port
    • Credential Authorization Username (configured when creating credentials) (e.g., “{username}@{tenant-id}”) / -u or --username
    • Credential Authorization Password (configured when creating credentials) (e.g., “your_pw”) / -P or --password
    • Server Certificate File (MQTT TLS Encryption) (e.g., “iothub.crt”) / -c or --cafile
    • Data Type (e.g., “telemetry” or “event”) / -t or --type
  1. (Optional) preprocessor_bosch.py is designed to follow Bosch’s diagnostic methodologies. You can create your own preprocessor_xxx.py, or modify preprocessor_example.py, to replace preprocessor_bosch.py for your own purpose. In that case, the corresponding lines in cloudfeeder.py should be modified as well.
  2. Navigate to dias_kuksa/utils/in-vehicle/cloudfeeder_telemetry/ and copy cloudfeeder.py and preprocessor_example.py to kuksa.val/clients/vss-testclient/, where the testclient.py file is located.
  3. Then the do_getValue(self, args) function from kuksa.val/clients/vss-testclient/testclient.py should be modified as below.
    ...

    def do_getValue(self, args):
        """Get the value of a parameter"""
        req = {}
        req["requestId"] = 1234
        req["action"] = "get"
        req["path"] = args.Parameter
        jsonDump = json.dumps(req)
        self.sendMsgQueue.put(jsonDump)
        resp = self.recvMsgQueue.get()
        # print(highlight(resp, lexers.JsonLexer(), formatters.TerminalFormatter()))
        self.pathCompletionItems = []
        datastore = json.loads(resp)
        return datastore

    ...
  4. Because cloudfeeder.py depends on cloud instance information, you should first create either an Eclipse Hono or a Bosch IoT Hub instance by following kuksa.cloud - Eclipse Hono (Cloud Entry), so that the information required to run cloudfeeder.py is ready.

  5. Download the server certificate here and place it in kuksa.val/clients/vss-testclient/, where the cloudfeeder.py file is located.

  6. Before running cloudfeeder.py, install the dependencies (mosquitto and mosquitto-clients from apt; pygments and cmd2 from pip3) first:

    $ sudo apt-get update
    $ sudo apt-get install mosquitto mosquitto-clients
    $ pip3 install pygments cmd2
    
  7. When all the required information is ready, navigate to kuksa.val/clients/vss-testclient/, and run cloudfeeder.py by commanding:

    $ python3 cloudfeeder.py -j {admin_json_token} --host {host_url} -p {port_number} -u {auth-id}@{tenant-id} -P {password} -c {server_certificate_file} -t {transmission_type}
    
  • Just a reminder, the information between {} should be different depending on the target Hono instance. You can follow kuksa.cloud - Eclipse Hono (Cloud Entry) to create a Hono instance.
  • admin_json_token can be found at kuksa.val/certificates/jwt/super-admin.json.token. Therefore, ../../certificates/jwt/super-admin.json.token should be entered for -j when the current directory is kuksa.val/clients/vss-testclient/.
  • If you have successfully made it here, you should see cloudfeeder.py fetching and transmitting data every 1-2 seconds.

DIAS Extension: SAE J1939 Option

Introduction to SAE J1939

Society of Automotive Engineers standard SAE J1939 is the vehicle bus recommended practice used for communication and diagnostics among vehicle components. Originating in the car and heavy-duty truck industry in the United States, it is now widely used in other parts of the world. SAE J1939 is a higher-layer protocol (e.g., an add-on software) that uses the CAN Bus technology as a physical layer. In addition to the standard CAN Bus capabilities, SAE J1939 supports node addresses, and it can deliver data frames longer than 8 bytes (in fact, up to 1785 bytes).

Since DIAS’s demonstrator vehicle is a Ford Otosan truck that follows the SAE J1939 standard, it is necessary for KUKSA to support the standard.

A normal DBC file is used to apply identifying names, scaling, offsets, and defining information to data transmitted within a CAN frame. The J1939 DBC file serves the same purpose but targets data transmitted within a Parameter Group Number (PGN) unit. This is because, in J1939, some parameter groups are delivered in more than one CAN frame, depending on the PGN’s data length.

To put it simply, one can take a look at one PGN example. The following PGN-65251 information is captured in the official SAE J1939-71 documentation revised in 2011-03 (PDF Download Link).

_images/pgn_65251.PNG

PGN-65251 defines “Engine Configuration 1 (EC1)” and consists of 39 bytes as stated in “Data Length”. This means that, to receive the complete information of PGN-65251, at least 7 CAN frames are required, because each TP.DT frame carries at most 7 bytes of payload (its first byte is a sequence number):

7 = 1 * TP.BAM + 6 * TP.DT

  • A Transfer Protocol Broadcast Announce Message (TP.BAM) is used to inform all the nodes (e.g., Raspberry-Pi) on the network that a large message is about to be broadcast; it defines the parameter group (the target PGN) and the total number of packets to be sent. After the TP.BAM is sent, a set of TP.DT messages is sent at specific time intervals.
  • A Transfer Protocol Data Transfer (TP.DT) is an individual packet of a multipacket message transfer. It is used for the transfer of data associated with parameter groups that have more than 8 bytes of data (e.g., PGN-65251: 39 bytes).

For example, one TP.BAM and three TP.DT messages would be sent to deliver a parameter group that has more than 20 bytes (PGN-65260) as illustrated below:

_images/j1939_transport_protocol.png
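
As a quick sanity check of these frame counts, the arithmetic can be written out as a small sketch (assuming a classic J1939 BAM transfer in which every TP.DT frame carries at most 7 payload bytes):

    import math

    def tp_frame_count(pgn_data_length):
        """Total CAN frames needed to broadcast a multi-packet parameter group:
        one TP.BAM announcement plus one TP.DT per 7 bytes of payload."""
        return 1 + math.ceil(pgn_data_length / 7)

    print(tp_frame_count(39))  # PGN-65251 (EC1, 39 bytes): 1 TP.BAM + 6 TP.DT = 7
    print(tp_frame_count(21))  # a 21-byte parameter group: 1 TP.BAM + 3 TP.DT = 4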

There are a lot of concepts defined in the SAE J1939 documentation that are required to conform to the J1939 transport protocol. One can look into the documentation to understand the concepts in depth. However, the general premise is simple: raw CAN frames are processed to produce PGN data that should be decoded into CAN signals consumed by an in-vehicle application. Having said that, finding an existing J1939 library that can convert raw CAN frames to PGN data should be the first step. Since dbcfeeder.py is written in Python, it makes sense to choose a library written in the same language.

The Python J1939 package converts raw CAN frames to PGN data and makes the data available for use. The following figures compare two scenarios where dbcfeeder.py reads CAN signals without and with J1939.

_images/dbcreader_schema.png

Without J1939, dbcfeeder.py receives decoded CAN signals through dbcreader.py, which reads raw CAN frames directly from a CAN interface (e.g., can0 or vcan0).

_images/j1939reader_schema.png

With J1939, dbcfeeder.py receives decoded CAN signals through j1939reader.py (source), which reads PGN messages from the j1939.ElectronicControlUnit (ECU) class of the Python j1939 package that converts raw CAN frames to PGN data. The j1939.ControllerApplication (CA) class from the Python j1939 package is a superclass of j1939reader.J1939Reader and utilizes the ECU class’s functionality to derive PGN data.

At the time of writing this documentation, the following features are available from the python j1939 package according to here:

  • One ElectronicControlUnit (ECU) can hold multiple ControllerApplications (CA)
  • ECU (CA) Naming according to SAE J1939/81
  • Full support of the transport protocol according to SAE J1939/21 for sending and receiving
    • Message Packaging and Reassembly (up to 1785 bytes)
      • Transfer Protocol Transfer Data (TP.DT)
      • Transfer Protocol Communication Management (TP.CM)
    • Multi-Packet Broadcasts
      • Broadcast Announce Message (TP.BAM)

Implementation to j1939reader.py

A sophisticated example of a j1939.ControllerApplication that receives PGN messages from a j1939.ElectronicControlUnit is already introduced here as OwnCaToProduceCyclicMessages. When running the OwnCaToProduceCyclicMessages class against a J1939 CAN log file, the following messages are shown in the OwnCaToProduceCyclicMessages terminal.

_images/OwnCaToProduceCyclicMessages.PNG

As shown above, each line prints the number and the length of a PGN that has been read. These messages are produced by a callback function, OwnCaToProduceCyclicMessages.on_message.

_images/on_message.PNG

As already mentioned, the general premise is that raw CAN frames are processed to produce PGN data that should be decoded into CAN signals consumed by an in-vehicle application. Here we can divide the premise into three requirements:

  1. Getting PGN data
  2. Decoding PGN data into CAN signals
  3. Getting the decoded CAN signals available on the target in-vehicle application (e.g., dbcfeeder.py)

It is already possible to receive PGN data through OwnCaToProduceCyclicMessages (code). Also, some parts of dbcreader.py (code) can be reused for getting the decoded signals ready for the in-vehicle application.

j1939reader.py in dbcfeeder.py
1. dbcfeeder.py without J1939
_images/dbcreader_schema.png
_images/dbcfeeder_import.PNG
_images/dbcfeeder_lines.PNG

In the case without J1939, dbcfeeder.py imports dbcreader.py and passes the required arguments when creating an instance of dbcreader.DBCReader. The dbcreader.DBCReader instance then starts a thread by running start_listening() and receives CAN frames through its connected CAN interface (cfg['can.port']).

2. dbcfeeder.py with J1939
_images/j1939reader_schema.png
_images/dbcfeeder_import_modified.PNG
_images/dbcfeeder_lines_modified.PNG

Likewise, in the case with J1939, dbcfeeder.py imports j1939reader.py instead of dbcreader.py and passes the required arguments when creating an instance of j1939reader.J1939Reader. The j1939reader.J1939Reader instance then starts a thread by running start_listening() and receives PGN data through a j1939.ElectronicControlUnit instance that is connected to the passed CAN interface (cfg['can.port']).

Decoding PGN Data with j1939reader.py

j1939reader.py (code) reuses OwnCaToProduceCyclicMessages and dbcreader.py for requirements 1 and 3, and adds the PGN decoding functionality for requirement 2, which is explained in detail in the following.

1. Function: start_listening
_images/start_listening.PNG

start_listening creates a j1939.ElectronicControlUnit instance and connects it to the passed CAN interface (cfg['can.port']). The ECU instance then adds the current j1939reader.J1939Reader instance (more precisely, the j1939.ControllerApplication it inherits from) and starts it in a thread. After start_listening has run, the ECU instance reads raw CAN frames from the connected CAN interface, converts them into PGN data and sends the result to the on_message callback of the j1939reader.J1939Reader instance.

2. Function: on_message
_images/on_message-modified.PNG

The callback function, on_message, receives PGN data and looks for a corresponding CAN message in self.db by running identify_message. If the return value of identify_message is not None, the observed PGN has a corresponding message; on_message then iterates over the message’s list of signals and, by running put_signal_in_queue, decodes each signal and puts the result in self.queue.

3. Function: identify_message
_images/identify_message.PNG

identify_message examines the database instance (self.db) that has been built with the passed DBC file (cfg['vss.dbcfile']) to get a message (cantools.database.can.Message) that corresponds to the observed PGN. Because PGN is the only available parameter that can identify what parameter group a CAN message is intended for, understanding how a CAN frame (especially CAN-ID) is structured is important so that the application can compare the observed PGN to a comparison message’s ID to confirm whether or not they match.

In the case of PGN-61444 (Electronic Engine Controller 1 / EEC1), the PGN is (0x)f004 when 61444 is converted to hex. Therefore, identify_message should find a CAN message with an ID that contains f004 among the messages in self.db. The IDs of all messages in self.db are determined by the passed DBC file (cfg['vss.dbcfile']). The following image (source) shows what a J1939 DBC file looks like.

_images/CAN-DBC-File-Format-Explained-Intro-Basics_2.png

The needed information in the above image is CAN ID: 2364540158, which is (0x)8CF004FE when converted to hex. To understand what exactly (0x)8CF004FE indicates, one can refer to the following image that explains the J1939 message format.

_images/j1939_message_format.png

As described above, CAN ID consists of 29 bits in J1939. To express the value on a bit level, the binary conversion needs to be applied to (0x)8CF004FE, making it (0b) 1000 1100 1111 0000 0000 0100 1111 1110. With this, the following information can be derived.

ID Form            Corresponding Value for EEC1
PGN                61444
PGN in hex         (0x) f004
PGN in binary      (0b) 1111 0000 0000 0100
DBC ID             2364540158
DBC ID in hex      (0x) 8cf004fe
DBC ID in binary   (0b) 1000 1100 1111 0000 0000 0100 1111 1110

Since this binary representation is 32 bits long, which is larger than 29 bits, the first three bits are dropped: (0b) 0 1100 1111 0000 0000 0100 1111 1110. With this and the message format image, the following information can be derived from the EEC1 message ID.

J1939 Message Info     Binary                          Decimal   Hex
3 Bit Priority         (0b) 0 11(00)                   3         (0x) c
18 Bit PGN             (0b) (00) 1111 0000 0000 0100   61444     (0x) f004
8 Bit Source Address   (0b) 1111 1110                  254       (0x) fe

As shown above, the PGN encoded in the EEC1 message ID is 61444, which means it is possible to confirm whether one of the CAN messages in self.db carries the same PGN as the observed PGN. identify_message converts the observed PGN into a hex value and compares it to the hex PGN value of each message in self.db. If the hex value of the observed PGN matches that of a comparison message’s PGN, the comparison message is the one the observed PGN indicates, and the message is returned. A hedged sketch of this ID arithmetic follows.
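
The sketch below is a hypothetical helper, not code from j1939reader.py; the PDU1 branch is included for completeness and does not apply to EEC1.

    def split_j1939_id(can_id):
        """Split a 29-bit J1939 CAN ID into priority, PGN and source address."""
        priority = (can_id >> 26) & 0x7
        pgn = (can_id >> 8) & 0x3FFFF  # EDP/DP bits + PDU format + PDU specific
        if ((pgn >> 8) & 0xFF) < 240:  # PDU1 (destination-specific): the PS byte is
            pgn &= 0x3FF00             # a destination address, not part of the PGN
        source_address = can_id & 0xFF
        return priority, pgn, source_address

    # EEC1 from the DBC file above: 2364540158 = 0x8CF004FE (bit 31 is the DBC
    # extended-frame flag and is masked off first).
    print(split_j1939_id(2364540158 & 0x1FFFFFFF))  # -> (3, 61444, 254)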

4. Function: put_signal_in_queue
_images/put_signal_in_queue.PNG

Once the target message is returned by identify_message, on_message iterates over the list of signals in the returned message and, by running put_signal_in_queue, puts each signal (cantools.database.can.Signal) with its calculated value into the queue (self.queue) that is later used to feed kuksa-val-server. Inside put_signal_in_queue there are two scenarios: one where the type of data is “list”, and one where the type of data is “bytearray”, as shown below.

_images/data_type.PNG

In the scenario where the data type is “list”, the size of data exceeds a CAN frame’s maximum payload of 8 bytes (e.g., 39 bytes with PGN-65251), and data comes in as a list of decimal numbers. In this case, the start byte and the length of the data must be calculated, because each number represents one byte’s decimal value and the data is accessed one byte at a time. For example, if the DBC file states that the observed signal’s start bit is 16 (bit numbering starts from 0 in DBC files) and its length is 16 bits, then the start byte is 2 (also counting from 0) and the data length is 2 bytes, meaning that the third and fourth numbers in the list express the observed signal’s value (see the sketch below). With this information, decode_signal calculates the value of the observed signal using the other attributes described by the DBC file.
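
The start-byte arithmetic from that example looks roughly as follows (a hedged sketch, not the actual put_signal_in_queue code; the payload values are placeholders):

    start_bit = 16  # from the DBC file (bit numbering starts at 0)
    length = 16     # signal length in bits

    start_byte = start_bit // 8  # -> 2 (byte numbering also starts at 0)
    num_bytes = length // 8      # -> 2

    data = [0x7D, 0x7D, 0x10, 0x27, 0xFF, 0xFF]         # part of a multi-packet payload
    relevant = data[start_byte:start_byte + num_bytes]  # third and fourth numbers
    print(start_byte, num_bytes, relevant)              # 2 2 [16, 39]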

In the other scenario, where the data type is “bytearray”, the size of data is 8 bytes. In this case the data is accessed one bit at a time, and the start bit and data length can be used without any processing since they are already expressed at the bit level. With this information, decode_byte_array directly calculates the value of the observed signal using the other attributes described by the DBC file.

Once the value is calculated, put_signal_in_queue checks whether the calculated value is above the signal’s maximum or below its minimum. If the value is outside the allowed range of the signal, it is set to the minimum or maximum before it is passed to the queue (self.queue).

  • One can refer to here to find out all the available attributes from cantools.database.can.Signal. This also depends on the target DBC file.
5. Function: decode_signal
_images/decode_signal_.PNG

decode_signal calculates the value of the observed signal when the data is accessed at the byte level, in which case data comes in as a list of decimal numbers. If the number of bytes (data length) is 1, the raw value can be extracted directly from data at the start byte, and the value of the signal is calculated as follows:

[value] = [offset] + [raw value] * [scale] (Source)

If the number of bytes (data length) is 2, two decimal numbers have to be aggregated to calculate the value of the signal, which is done by running decode_2bytes (see the sketch below).
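
A hedged one-byte sketch of that formula (the real decode_signal also delegates the two-byte case to decode_2bytes):

    def decode_one_byte(data, start_byte, scale, offset):
        """value = offset + raw value * scale, with the raw value taken
        directly from the byte at start_byte."""
        raw_value = data[start_byte]
        return offset + raw_value * scale

    # e.g. a hypothetical signal with scale 0.4 and offset 0 stored in byte 2:
    print(decode_one_byte([0x7D, 0x7D, 0x64, 0xFF], 2, 0.4, 0))  # -> 40.0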

6. Function: decode_2bytes
_images/decode_2bytes_.PNG

decode_2bytes calculates the value of the observed signal when the signal is described by two bytes. Because each decimal number in the list can be converted to hex (e.g., 15 = 0x0f), representing one byte, the aggregation of the two decimal numbers is done after converting them to hex.

_images/endian.png

As described above, the aggregation depends on the byte order, which is either “little_endian” or “big_endian”. According to here, in J1939 the payload is encoded in “little_endian” order from byte 0 to byte 7, while the bits within every byte are in “big_endian” order, as described in the table below.

Byte   0      1      2       3       4       5       6       7
Bits   7..0   15..8  23..16  31..24  39..32  47..40  55..48  63..56

To get a raw value out of the two hex numbers, they need to be arranged in “big_endian” order before the decimal conversion. Since the bits within every byte are already in “big_endian” order, reordering at the bit level is never required. Therefore, in the case of “little_endian”, the start byte goes at the end, whereas with “big_endian” (highly unlikely in J1939) it goes at the beginning, and the order of bits within each byte stays the same. Once the two numbers are merged into one hex number, the merged number is converted back to decimal to give the raw value, and the same formula used in decode_signal is applied to calculate the result value (see the sketch below).
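
A minimal sketch of that merge for the usual J1939 case of “little_endian” byte order (an illustration, not the exact decode_2bytes implementation; the example scaling is hypothetical):

    def decode_two_bytes(data, start_byte, scale, offset, byte_order="little_endian"):
        lo, hi = data[start_byte], data[start_byte + 1]
        if byte_order == "little_endian":  # J1939: the start byte is the low byte
            raw_value = (hi << 8) | lo     # e.g. 0x10, 0x27 -> 0x2710
        else:                              # "big_endian": highly unlikely in J1939
            raw_value = (lo << 8) | hi
        return offset + raw_value * scale

    # e.g. an engine-speed-like signal with 0.125 rpm/bit and offset 0:
    print(decode_two_bytes([0x00, 0x00, 0x00, 0x10, 0x27], 3, 0.125, 0))  # -> 1250.0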

7. Function: decode_byte_array
_images/decode_byte_array.PNG

decode_byte_array calculates the value of the observed signal when the data is accessed at the bit level, in which case data comes in as a bytearray. As explained for decode_2bytes, the payload is encoded with the bytes in “little_endian” order and the bits within every byte in “big_endian” order. If the byte order is “little_endian”, the bytearray is reversed first and then converted to a list of bits by running byteArr2bitArr, producing a binary string that is later converted to an integer to get the raw value. Otherwise the same process is applied without reversing the bytearray, which is highly unlikely in J1939. In either case, reordering at the bit level is not required.

8. Function: byteArr2bitArr
_images/byteArr2bitArr.PNG

byteArr2bitArr converts a bytearray to a list of bits. A hedged sketch of this bit-level decoding path follows.
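
The sketch only illustrates the conversion described above and is not the exact implementation; the payload and scaling are placeholders.

    def byte_arr_to_bit_arr(byte_arr):
        """Convert a bytearray to a flat list of bit characters (MSB first per byte)."""
        bits = []
        for byte in byte_arr:
            bits.extend(format(byte, "08b"))
        return bits

    # Reversing first mimics the "little_endian" handling described above; the
    # resulting bit string is then sliced and converted back to an integer.
    payload = bytearray([0x10, 0x27, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00])
    bits = byte_arr_to_bit_arr(reversed(payload))
    raw_value = int("".join(bits), 2) & 0xFFFF  # keep the 16 bits of interest
    print(raw_value * 0.125)                    # -> 1250.0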

Running dbcfeeder.py with j1939reader.py

  1. Clone the junh-ki/dias_kuksa repository:

    $ git clone https://github.com/junh-ki/dias_kuksa.git
    
  2. Navigate to dias_kuksa/utils/in-vehicle/j1939feeder/ and copy j1939reader.py to kuksa.val/clients/feeder/dbc2val/, where dbcfeeder.py is located.

  3. Install J1939 Python dependency:

    $ pip3 install j1939
    
  4. Come back to the home directory and install the package from the benkfra/j1939 fork:

    $ cd
    $ git clone https://github.com/benkfra/j1939.git
    $ cd j1939
    $ pip install .
    
  5. In dbcfeeder.py (kuksa.val - dbcfeeder.py Setup), every line that involves dbcreader.py should be replaced so that it works with j1939reader.py.

5-1. Import part:

# import dbcreader
import j1939reader

5-2. Reader class instance creation part:

# dbcR = dbcreader.DBCReader(cfg,canQueue,mapping)
j1939R = j1939reader.J1939Reader(cfg,canQueue,mapping)

5-3. start_listening function part:

# dbcR.start_listening()
j1939R.start_listening()
  6. Make sure kuksa-val-server is up and running and a CAN interface (vcan0 or can0) is configured before running dbcfeeder.py.

  7. Navigate to kuksa.val/clients/feeder/dbc2val/ where dbcfeeder.py is located, and command the following:

    $ python3 dbcfeeder.py -d vcan0 -j ../../../certificates/jwt/super-admin.json.token --dbc dias_simple.dbc --mapping dias_mapping.yml
    
  • The following screenshots show what values are stored in kuksa-val-server at the end of playing log files (can0_otosan_can0-30092020 and can0_otosan_can2-30092020).
_images/sim_without_j1939.PNG

In the normal case, dbcfeeder.py is not able to read EngRefereneceTorque, EngSpeedAtIdlePoint1 and EngSpeedAtPoint2. These three signals belong to PGN-65251 (Engine Configuration 1 / J1939) and are delivered as a TP.BAM followed by multiple TP.DT messages, since the size of the parameter group is bigger than 8 bytes (the size of one CAN frame). Also, the value of Aftertreatment1IntakeNOx is 3076.75, which cannot be correct since it is bigger than the signal’s maximum value in the DBC file, as shown below.

_images/Aftertreatment1IntakeNOx_max.PNG

(DBC Source)

_images/sim_with_j1939_.PNG

With j1939reader.py, dbcfeeder.py is not only able to read these signals, but the value of Aftertreatment1IntakeNOx also appears at the signal’s maximum, and the other signals’ values differ from the case without J1939, as shown above. This is because dbcfeeder.py has followed the J1939 standard when reading signals from CAN, and all the values here are valid as they fall within their designated ranges in the DBC file.

Step 3: Cloud Setup

_images/cloud_schema.png

Deployment Option 1 - Manual

kuksa.cloud - Eclipse Hono (Cloud Entry)
_images/cloud_hono.png
_images/eclipse-hono.png

Eclipse Hono provides remote service interfaces for connecting large numbers of IoT devices to a back end and interacting with them in a uniform way regardless of the device communication protocol.

Bosch IoT Hub as Hono
_images/bosch-iot-hub.PNG

The Bosch IoT Hub comprises open-source components developed in the Eclipse IoT ecosystem and other communities, and uses Eclipse Hono as its foundation. Utilizing Hono is essential to deal with a large number of connected vehicles due to its scalability, security and reliability. The Bosch IoT Hub is available as a free plan for evaluation purposes. The following steps describe how to create a free Bosch IoT Hub instance.

  1. If you don’t have a Bosch ID, register one here and activate your ID through the registered E-Mail.
  2. Go to the main page and click “Sign-in” and finish signing-up for a Bosch IoT Suite account. Then you would be directed to the “Service Subscriptions” page.
  3. In the “Service Subscriptions” page, you can add a new subscription by clicking “+ New Subscription”. Then it would direct you to Product Selection Page that shows you what services can be offered. Choose “Bosch IoT Hub”.
  4. Then select “Free Plan” and name your Bosch IoT Hub instance. The name should be unique (e.g., kuksa-tut-jun) and click “Subscribe”.
  5. After that, you would see your subscription details. Click “Subscribe” again to finish the subscription process.
  6. Now you would be in Service Subscriptions Page. It would take a minute or two for your instance to change its status from “Provisioning” to “Active”. Make sure the status is “Active” by refreshing the page.
  7. When the status is “Active”, click “Show Credentials” for the target instance to see the instance’s credentials. This information is used to access the device registry and register your device in the further steps. (You don’t need to save this information since you can always come back to see it.) Copy the values of the “username” and “password” keys under “device_registry” and save them somewhere.
  8. Now go to Bosch IoT Hub - Management API. The Management API is used to interact with the Bosch IoT Hub for management operations. This is where you can register a device on the Bosch IoT Hub instance you’ve just created and get the tenant configuration that you would ultimately use as input arguments when running cloudfeeder.py (kuksa.val - cloudfeeder.py Setup) for a specific device (e.g., Raspberry-Pi of a connected vehicle).

8-1. Click “Authorize” and paste the “username” and “password” that you copied in 7, then click “Authorize”. If successfully authorized, click “Close” to close the authorization window.

8-2. Under the “devices” tab, you can find the “POST” bar. This is to register a new device. Click the tab and then “Try it out” to edit. Copy and paste the tenant-id of the Bosch IoT Hub instance to where it is intended to be placed.

8-3. Under “Request body”, there would be a JSON dictionary like the following:

{
    "device-id": "4711",
    "enabled": true
}

You can rename the string value of “device-id” according to your taste:

{
    "device-id": "kuksa-tut-jun:pc01",
    "enabled": true
}

8-4. Then click “Execute”. If the server responds with code 201, the device has been registered successfully. If you click “Execute” with the same JSON dictionary again, it returns code 409, which means you have tried to register the same device again and the request was rejected because of the conflict with the existing one. However, if you change “device-id” to something new and click “Execute”, it returns code 201 because you have just registered a new device name.

  • Just like this, you can register up to 25 devices with a free plan Bosch IoT Hub instance. This means that 25 vehicles or any other IoT devices can be connected to this one Bosch IoT Hub instance and each and every one of them interacts with the instance through a unique “device-id”.
  • To list all the registered devices’ ids, you can click the “GET /registration/{tenant-id}” bar, type the instance’s tenant-id and click “Execute”. If successful, the server returns code 200 with the device data listing all the devices that are registered to the instance. (The same calls can also be scripted, as sketched below.)
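
A hedged Python sketch of these Management API calls; the base URL is a placeholder for your instance’s device-registry endpoint, and the username/password are the device_registry credentials copied in step 7:

    import requests

    # Placeholders - take these values from your own instance's credentials.
    BASE_URL = "https://<device-registry-host>"
    TENANT_ID = "<tenant-id>"
    AUTH = ("<registry-username>", "<registry-password>")

    # Register a new device (the POST request issued via the Management API above).
    resp = requests.post(f"{BASE_URL}/registration/{TENANT_ID}", auth=AUTH,
                         json={"device-id": "kuksa-tut-jun:pc01", "enabled": True})
    print(resp.status_code)  # 201 = created, 409 = device already registered

    # List all registered devices (GET /registration/{tenant-id}).
    print(requests.get(f"{BASE_URL}/registration/{TENANT_ID}", auth=AUTH).json())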

9. What we have done so far is create a Bosch IoT Hub instance and register devices in it. However, we haven’t yet configured credentials for each device. Credentials let you access a specific device that is registered in the instance. The following steps illustrate how to add new credentials for a device.

9-1. Under the “credentials” tab, find and click the “POST” bar.

9-2. Click “Try it out” and paste the tenant-id of the Bosch IoT Hub instance to where it is intended to be placed.

9-3. In the JSON dictionary, change the value of “device-id” to the target device-id’s value.

9-4. Set values of “auth-id” and “password” according to your preference:

{
    "device-id": "kuksa-tut-jun:pc01",
    "type": "hashed-password",
    "auth-id": "pc01",
    "enabled": true,
    "secrets": [
        {
            "password": "kuksatutisfun01"
        }
    ]
}

If the server responds with code 201, the new credentials have been added successfully.

  • Here the values of “auth-id” and “password” are used to run cloudfeeder.py. Therefore it is recommended to save them somewhere.

9-5. Now we have all the information needed to run cloudfeeder.py:

  10. With the information from 9-5 (the values will be different in your case), we can run cloudfeeder.py (kuksa.val - cloudfeeder.py Setup). Navigate to kuksa.val/clients/vss-testclient/ and command:

    $ python3 cloudfeeder.py --host mqtt.bosch-iot-hub.com -p 8883 -u pc01@td23aec9b9335415594a30c7113f3a266 -P kuksatutisfun01 -c iothub.crt -t telemetry
    
kuksa.cloud - InfluxDB (Time Series Database)
_images/cloud_influxdb.png

Now that we have set up a Hono instance, cloudfeeder.py can send telemetry data to Hono every one to two seconds. Hono may be able to collect all the data from its connected vehicles. However, Hono is not a database, meaning that it does not store the collected data itself. This means that we need a time series database that can collect and store the data received by Hono in chronological order.

InfluxDB, another kuksa.cloud component, is an open-source time series database. In KUKSA, InfluxDB is meant to be used as the back end that stores the data incoming to Hono. With InfluxDB, we can make use of the collected data not only for visualization but also for a variety of external services such as a mailing service or an external diagnostic service. InfluxDB sits on the northbound side of Hono, along with Hono-InfluxDB-Connector, which is placed between Hono and InfluxDB.

  • To set up InfluxDB and Hono-InfluxDB-Connector, we can use a Linux machine (Linux Machine (Data Consumer)). From Hono’s point of view, the Linux machine here can be considered a data consumer, while the in-vehicle Raspberry-Pi is considered a data publisher.
  • The following steps to set up InfluxDB are based on this tutorial.
  1. VirtualBox with Ubuntu 18.04 LTS is used here for setting up InfluxDB and Hono-InfluxDB-Connector. (VM Setup Tutorial can be found here.) (If your default OS is already Linux, this step can be skipped.)

  2. Run your Virtual Machine (VM) and open a terminal.

  3. Before InfluxDB installation, command the following:

    $ sudo apt-get update
    
    $ sudo apt-get upgrade
    
    $ sudo apt install curl
    
    $ curl -sL https://repos.influxdata.com/influxdb.key | sudo apt-key add -
    
    $ source /etc/lsb-release
    
    $ echo "deb https://repos.influxdata.com/${DISTRIB_ID,,} ${DISTRIB_CODENAME} stable" | sudo tee /etc/apt/sources.list.d/influxdb.list
    
  4. Then install InfluxDB:

    $ sudo apt-get update && sudo apt-get install influxdb
    
  5. Start InfluxDB:

    $ sudo service influxdb start
    
  • If there is no output produced from this command, you have successfully set up InfluxDB on your VM. Please continue with step 6 if you want to know how to interact with InfluxDB through a Command Line Interface (CLI). Otherwise, you can move on directly to Hono-InfluxDB-Connector (dias_kuksa - Hono-InfluxDB-Connector).
  6. Connect to InfluxDB by commanding:

    $ influx
    
  • After this command, you would be inside the InfluxDB shell.
  7. Create a database, “kuksademo”, by commanding inside the InfluxDB shell:

    > CREATE DATABASE kuksademo
    
  • This command produces no output, but when you list the database, you should see that it was created.
  8. List the databases by commanding inside the InfluxDB shell:

    > SHOW DATABASES
    
  9. Select the newly created database, “kuksademo”, by commanding inside the InfluxDB shell:

    > USE kuksademo
    
  • It should produce the following output on the terminal: “Using database kuksademo”
  10. Insert some test data using the following command:

    > INSERT cpu,host=serverA value=0.64
    
  • More information about inserting data can be found here
  11. The insert command does not produce any output, but you should see your data when you perform a query:

    > SELECT * from cpu
    
  12. Type “exit” to leave the InfluxDB shell and return to the Linux shell:

    > exit
    
  13. (Optional) If you want to write test data from the Linux shell, you can run the following one-line script:

    $ while true; do curl -i -XPOST 'http://localhost:8086/write?db=kuksademo' --data-binary "cpu,host=serverA value=`cat /proc/loadavg | cut -f1 -d ' '`"; sleep 1; done
    
  • This command will write data to the kuksademo database every 1 second.
  14. You can verify that data is being sent to InfluxDB by using the influx shell and running a query:

    > influx
    > USE kuksademo
    > SELECT * FROM cpu
    
dias_kuksa - Hono-InfluxDB-Connector
_images/cloud_hono-influxdb-connector.png

Now that Hono and InfluxDB are set up, we need a connector application to transmit the data incoming to Hono to InfluxDB. cloudfeeder.py produces and sends Hono the resulting telemetry messages in the form of a JSON dictionary. The connector application therefore needs to read the JSON dictionary from Hono, map the dictionary to several individual metrics and send them to InfluxDB using the curl command.

  • Since the messaging endpoint of Hono (Bosch IoT Hub) follows the AMQP 1.0 protocol, the connector application should also be AMQP based.
  • An AMQP-based connector application can be found in dias_kuksa/utils/cloud/maven.consumer.hono from the junh-ki/dias_kuksa repository. The application is written based on iot-hub-examples/example-consumer from the bosch-io/iot-hub-example repository.
  1. To set up the connector, you have to clone the junh-ki/dias_kuksa repository on your machine first:

    $ git clone https://github.com/junh-ki/dias_kuksa.git
    
  2. Navigate to dias_kuksa/utils/cloud/maven.consumer.hono and check README.md. As stated in README.md, there are a few prerequisites to be installed before running this application.

2-1. Update the system:

$ sudo apt update
$ sudo apt upgrade

2-2. Install Java (OpenJDK 11.0.8):

$ sudo apt install openjdk-11-jre-headless openjdk-11-jdk-headless
$ export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64/
$ echo $JAVA_HOME

2-3. Install Maven (Apache Maven 3.6.0):

$ sudo apt install maven
$ mvn --version

2-4. Install mosquitto-clients:

$ sudo apt install mosquitto-clients

2-5. Install curl:

$ sudo apt install curl
  3. Navigate to dias_kuksa/utils/cloud/maven.consumer.hono/ and command the following:

    $ mvn clean package -DskipTests
    
  • This command compiles the src folder with Maven and produces the target folder that contains a .jar formatted binary file, maven.consumer.hono-0.0.1-SNAPSHOT.jar.
  4. Now that you have the binary file, you can execute the connector application. In the same directory, dias_kuksa/utils/cloud/maven.consumer.hono/, command the following:

    $ java -jar target/maven.consumer.hono-0.0.1-SNAPSHOT.jar --hono.client.tlsEnabled=true --hono.client.username={messaging-username} --hono.client.password={messaging-password} --tenant.id={tenant-id} --device.id={device-id} --export.ip={export-ip}
    
  • (Bosch IoT Hub) The corresponding info (messaging-username, messaging-password, tenant-id, device-id) can be found in Service Subscriptions Page.
  • If InfluxDB is deployed manually, export-ip shall be set to: localhost:8086.
  • The startup can take up to 10 seconds. If you are still running cloudfeeder.py, the connector application should print out telemetry messages on the console.
  5. (Optional) If you want to change the way the connector application post-processes telemetry messages, you can modify ExampleConsumer.java, which can be found in the directory dias_kuksa/utils/cloud/maven.consumer.hono/src/main/java/maven/consumer/hono/.
  • The method, handleMessage, is where you can post-process.
  • The content variable is where the received JSON dictionary string is stored.
  • To separate the dictionary into several metrics and store them in a map, the mapJSONDictionary method is used.
  • Each metric is stored in a variable individually according to its type and sent to the InfluxDB server through the curlWriteInfluxDBMetrics method.
  • You can add the post-processing part before curlWriteInfluxDBMetrics if necessary.
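
For orientation, the following Python sketch mirrors what handleMessage effectively does with each telemetry message (map the JSON dictionary to individual metrics and write them to InfluxDB over its HTTP API). The actual connector is Java; the names and the example payload here are illustrative only.

    import json
    import requests

    EXPORT_IP = "localhost:8086"  # matches the --export.ip argument
    DB = "kuksademo"              # target InfluxDB database

    def handle_message(content: str):
        """content is the JSON dictionary string produced by cloudfeeder.py."""
        metrics = json.loads(content)
        # InfluxDB line protocol: <measurement> <field>=<value>
        lines = "\n".join(f"{name} value={value}" for name, value in metrics.items())
        requests.post(f"http://{EXPORT_IP}/write?db={DB}", data=lines)

    handle_message('{"Aftertreatment1IntakeNOx": 123.4, "EngSpeed": 850.0}')
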
kuksa.cloud - Grafana (Visualization Web App)
_images/cloud_grafana.png

So far we have successfully managed to set up Hono and InfluxDB, and transmit data incoming to Hono to InfluxDB by running Hono-InfluxDB-Connector. Now our concern is how to visualize the data inside InfluxDB. One way to do this is to use Grafana.

Grafana is a multi-platform open source analytics and interactive visualization web application. The idea here is to get Grafana to read InfluxDB and visualize the read data.

  • The installation steps to set up Grafana are based on here.
  1. To install Grafana (stable version 2.6) on your VM, run the following commands:

    $ sudo apt-get install -y apt-transport-https
    $ sudo apt-get install -y software-properties-common wget
    $ wget -q -O - https://packages.grafana.com/gpg.key | sudo apt-key add -
    $ echo "deb https://packages.grafana.com/oss/deb stable main" | sudo tee -a /etc/apt/sources.list.d/grafana.list
    $ sudo apt-get update
    $ sudo apt-get install grafana
    
  2. Start Grafana service:

    $ sudo service grafana-server start
    
  • If this command doesn’t work, list PIDs on port 3000 (Grafana uses port 3000) to see whether grafana-server is already running on one of them:

    $ sudo apt install net-tools
    $ sudo netstat -anp tcp | grep 3000
    
  • assuming the PID number is: 13886:

    $ sudo kill 13886
    $ sudo service grafana-server start
    
  3. Check whether the Grafana instance is running:

    $ sudo service grafana-server status
    
  • ctrl + c to get out.
  4. Now that the Grafana server is running on your machine, you can access it with a web browser. Open a browser and go to the following address:

    http://localhost:3000/
    
  5. Log in with the admin account:

    Email or username: admin
    Password: admin
    
  6. After logging in, click “Configuration” on the left, click “Add data source” and select “InfluxDB”.

  7. Then you would be in the InfluxDB Settings page. Go to “HTTP” and set the URL as follows:

    URL: http://localhost:8086
    
  8. Then go to “InfluxDB Details”. Here we are going to select the “kuksademo” database that we created to test InfluxDB. You can also choose another database that Hono-InfluxDB-Connector has been sending data to. To choose “kuksademo”, enter the following information:

    Database: kuksademo
    User: admin
    Password: admin
    HTTP Method: GET
    
  9. Click “Save & Test”. If you see the message “Data source is working”, Grafana has been successfully connected to InfluxDB.

  10. Now you can create a new dashboard. Click “Create” on the left and click “Add new panel”.

  11. Then you would be in the panel editing page. You can choose which metrics you want to analyze. This depends entirely on what metrics you have been sending to InfluxDB. Since the metric we created in “kuksademo” is cpu, you can set the following information:

    FROM: default cpu

  12. Click “Apply” on the upper right. Now that a new dashboard with a panel has been created, you can change the time scope, refresh, or save the dashboard at the top.

  • In the same way, you can create multiple panels in the dashboard for different metrics.

Deployment Option 2 - Docker Compose

Deployment Option 1 - Manual has been introduced to show what kinds of cloud components are used for kuksa.cloud and how to configure them so that they can interact with each other. However, deploying each and every cloud component, configuring them, setting a data source for Grafana and designing its dashboard manually is not feasible when considering a huge number of connected vehicles. This is where container technology like Docker comes into play. A couple of key concepts are described below:

  • Docker Container: A standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another.
  • Docker Compose: A tool for defining and running several Docker containers. A YAML file is used to configure the application’s services.
  • Kubernetes: One difference between Docker Compose and Kubernetes is that Docker Compose runs on a single host, whereas Kubernetes is for running and connecting containers on multiple hosts.

The key point of using Docker is to facilitate automation so that users can deploy the applications in an agile and efficient way. To learn all the concepts and basics of Docker and be familiar with them, you can follow this tutorial. The subsequent contents are written based on the assumption that readers are familiar with Docker.

In the case of DIAS-KUKSA, there are two deployment options that utilize Docker:

  • Docker Compose
  • Azure Kubernetes Service (AKS)

When deploying with Docker Compose, it is assumed that a Bosch-IoT-Hub instance is already up and running. Therefore the deployment only includes Hono-InfluxDB-Connector, InfluxDB and Grafana. Docker Compose runs only on a single host (a single Ubuntu machine). Even though it can only take care of a single connected vehicle, deploying with Docker Compose can be advantageous because it eases the development process by reducing the time and effort spent on setting deployment configuration for each application and creating the identical Grafana dashboard. Therefore, Docker Compose deployment is suitable for development, test and evaluation purposes.

On the other hand, AKS includes all the cloud components (Eclipse Hono, Hono-InfluxDB-Connector, InfluxDB and Grafana) and runs on multiple hosts, meaning that it can be highly advantageous for commercial distribution that deals with a large amount of data transfer involving many connected vehicles. The downside of using AKS is that it costs money, since the service is offered by Microsoft Azure, and its deployment configuration is more intricate. Therefore, AKS is more suitable for commercial distribution than for development purposes.

In this part, Docker Compose deployment is closely covered. The contents include:

  1. How to install Docker and Docker Compose
  2. How to modify the Hono-InfluxDB-Connector Docker image
  3. How to set data sources and dashboards in Grafana according to your use-case
  4. How to set up docker-compose.yml for the KUKSA cloud components (Hono-InfluxDB-Connector, InfluxDB and Grafana)
  5. How to deploy the KUKSA cloud components with Docker Compose

The end goal here is to deploy these applications as Docker containers, as shown in the figure below, and to establish connectivity among the containerized applications.

_images/docker_example.png
Installing Docker and Docker Compose
  1. Install Docker with snap:

    $ sudo snap install docker
    
  • If you don’t install Docker with snap, you may face a version conflict with Docker Compose.
  • Docker installation with snap includes Docker Compose installation.
  2. Check the versions:

    $ docker --version
    $ docker-compose --version
    
  3. If you don’t want to preface the docker command with sudo, create the docker group and add your user to it:

    $ sudo groupadd docker
    $ sudo usermod -aG docker $USER
    $ newgrp docker
    
  4. Log out and log back in to re-evaluate your group membership.

  5. Run docker commands without sudo to verify that the changes have been applied:

    $ docker run hello-world
    
_images/hello-world.PNG

Now you are ready to proceed. If you only want to test the connectivity with the default DIAS-KUKSA setting, you can directly go to Deployment with Docker Compose.

Modifying and creating a Docker image for Hono-InfluxDB-Connector

Unlike InfluxDB and Grafana, Hono-InfluxDB-Connector is an application designed to serve one particular task. This means that the application needs to be changed according to the target metrics. Since the application cannot be generic but is user-specific, it is important to understand how to make changes to the application, build a new Docker image with those changes and push it to the Docker Hub registry. One might ask why the application needs to be containerized and pushed to Docker Hub when one could simply run the resulting JAR file on a local machine. This is explained with the figure below.

_images/docker-compose-scenario.png

The figure describes the following scenario:

  1. Docker Host 1 builds the Hono-InfluxDB-Connector image by running its Dockerfile. During the build process, Maven and Java images are pulled to build the executable JAR file.
  2. After the JAR file is created, the Docker image is produced. Docker Host 1 then pushes the image to the Docker Hub registry on the Internet. (To do this, one needs to log in to Docker Hub on a local terminal to designate the destination repository.)
  3. Once the Hono-InfluxDB-Connector image is available on Docker Hub, the other hosts (2, 3, 4) can also use it, as long as Internet access is available and Docker (and Docker Compose) is installed locally. Finally, the other Docker hosts (2, 3, 4) pull and run Hono-InfluxDB-Connector along with InfluxDB and Grafana through Docker Compose. The resulting containers are set to interact with each other according to the configuration in docker-compose.yml.

As mentioned in 3), the remaining Docker hosts (2, 3, 4) do not need to pull the latest code and build it with Maven to create the executable JAR file, because the updated Hono-InfluxDB-Connector image is already available on Docker Hub. All they need is Docker (and Docker Compose) installed locally, Internet access, and the pull address of the updated image. This avoids repetitive tasks such as pulling the source code repository, making changes and building the application with Maven. Instead, a user can simply pull the application image from Docker Hub and run a container from it.

  1. Make changes in dias_kuksa/utils/cloud/maven.consumer.hono/src/main/java/maven/consumer/hono/ExampleConsumer.java according to your purpose.
_images/connector_changes.PNG
  • The changes should be made according to the telemetry message sent by cloudfeeder.py. Please consider the format of the message and the availability of the intended metrics in it.
  2. To create a Docker image out of Hono-InfluxDB-Connector, a Dockerfile is required. The Dockerfile for Hono-InfluxDB-Connector is located in dias_kuksa/utils/cloud/maven.consumer.hono/. It consists of two different stages, Jar Building and Image Building, and is self-explanatory thanks to the comments in it. Navigate to dias_kuksa/utils/cloud/maven.consumer.hono/ and build the Docker image by commanding:

    $ docker build -t hono-influxdb-connector .
    
  3. Assuming a Docker Hub account has already been created (please create one via this link if you haven’t), log in to Docker Hub from your terminal by commanding:

    $ docker login --username={$USERNAME} --password={$PASSWORD}
    
  4. Before pushing hono-influxdb-connector to your Docker Hub repository, tag it according to the following convention:

    $ docker tag hono-influxdb-connector {$USERNAME}/hono-influxdb-connector
    

This way, the tagged Docker image is directed to your repository on Docker Hub and archived there when pushed.

  5. Push the tagged Docker image:

    $ docker push {$USERNAME}/hono-influxdb-connector
    
  6. (Optional) When you want to pull the image from Docker Hub on another Docker host, simply command:

    $ docker pull {$USERNAME}/hono-influxdb-connector
    
Configuring a Grafana’s Data Source, Dashboard and Notifier
_images/dashboards.PNG

The figure above shows the seven dashboards created based on Bosch’s DIAS-KUKSA implementation. The following is one of the first six NOx-map dashboards.

_images/nox_map-tscr_bad.PNG

As named in the screenshot above, the depicted dashboard, “DIAS-BOSCH NOx Bin Map - TSCR (Bad)”, consists of 12 status panels, each of which describes a data bin and shows three metrics: Sampling Time (s), Cumulative NOx DS (g) and Cumulative Work (J). Every metric here comes from the InfluxDB data source. The rest of the first six dashboards follow the same format. The following is the last dashboard.

_images/total_sampling_time.PNG

As shown above, the last dashboard keeps track of the cumulative time of bin-data sampling. This dashboard is meant to send the administrator an alert through the notifier feature when a certain sampling-time threshold is reached.

All these dashboards are simply designed to monitor a specific set of data that Hono-InfluxDB-Connector stores in InfluxDB, according to their intended purposes.

Since the Grafana Docker image is offered without any pre-configured dashboards or panels, users would otherwise have to set InfluxDB as a data source, create these dashboards with multiple panels and set up an email notifier in Grafana manually on every Docker host (virtual machine) each time they deploy the application, which takes a lot of handwork and is significantly inefficient.

Grafana’s provisioning system helps users with this problem. With the provisioning system, data sources, dashboards and notifiers can be defined via configuration files (YAML and JSON) that can be version-controlled with Git.
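For orientation, the provisioning files referenced in the following steps are laid out roughly as follows (reconstructed from the paths mentioned in this section; check the dias_kuksa repository for the exact structure):

    connector-influxdb-grafana-deployment/
    ├── docker-compose.yml
    ├── env                              # credential file, hidden by default
    └── grafana_config/
        ├── grafana.ini                  # SMTP (sender e-mail) settings
        └── grafana-provisioning/
            ├── datasources/
            │   └── datasource.yml
            ├── dashboards/
            │   ├── dashboard.yml
            │   └── nox_map_dashboard.json
            └── notifiers/
                └── notifier.yml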

  1. To set data sources when deploying Grafana with Docker Compose, a YML configuration file can be used. Under dias_kuksa/utils/cloud/connector-influxdb-grafana-deployment/grafana_config/grafana-provisioning/, there is datasources/ with datasource.yml inside.
_images/datasource.PNG
  • datasource.yml contains the same information used to set a data source manually on the Grafana web-page (Grafana Server > Configuration > Add data source: “InfluxDB”, “URL”, “Database”, “User”, “Password”).
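  • A minimal datasource.yml might look like the following sketch (values are illustrative; the URL and database assume the Docker Compose deployment described later, where the InfluxDB service is named influxdb and the database is dias_kuksa_tut, and the actual file in the repository may differ):

    # grafana-provisioning/datasources/datasource.yml (sketch)
    apiVersion: 1

    datasources:
      - name: InfluxDB
        type: influxdb
        access: proxy
        url: http://influxdb:8086      # service name resolvable inside the Docker network
        database: dias_kuksa_tut
        user: admin
        password: admin
        isDefault: true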
  2. Likewise, to set dashboards when deploying Grafana with Docker Compose, a YAML and a JSON configuration file can be used. Under the same /grafana-provisioning/ directory, there is dashboards/ with dashboard.yml and nox_map_dashboard.json inside.
_images/dashboard.PNG
  • dashboard.yml states the name of the data source that the dashboards receive data from and the path where the dashboard file will be located inside the Grafana container when it runs.
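  • A dashboard provider file typically looks like the following sketch (provider name, folder and path are assumptions; the actual dashboard.yml in the repository may differ, e.g. it may also reference the data source explicitly):

    # grafana-provisioning/dashboards/dashboard.yml (sketch)
    apiVersion: 1

    providers:
      - name: 'dias-kuksa'
        orgId: 1
        folder: ''
        type: file
        disableDeletion: false
        options:
          # directory inside the Grafana container where the dashboard JSON files are mounted
          path: /etc/grafana/provisioning/dashboards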
_images/nox_map_dashboard_json.PNG
  • To create such a dashboard JSON file, one needs to create a dashboard manually in Grafana and export it as a JSON file (Grafana Server > Dashboards > Your_Target_Dashboard > Save dashboard (at the top) > “Save JSON to file”). Then rename it according to your preference (e.g., nox_map_dashboard.json).
  3. As stated earlier, the last panel, titled “Cumulative Bin Sampling Time”, keeps track of the cumulative sampling time of data collection. If the point of evaluation is set to 10 hours, the notification threshold of the panel would be 36000, since sampling is done approximately every second (10 h = 600 min = 36000 s). When the threshold is reached, Grafana sends a message to the registered email address to notify the user that it is time to evaluate. This is configured with notifier.yml in /grafana-provisioning/notifiers/.
_images/notifier.PNG
  • notifier.yml states the type of notifier (e.g., Email, Slack, LINE, etc.) and the receivers’ addresses when Email is chosen as the notifier type. If there is more than one receiver, multiple addresses can be added, separated by semicolons, as shown in the screenshot. The result can be checked under Alerting > Notification Channels on the Grafana web page.
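  • A notifier provisioning file might look like the following sketch (uid, name and addresses are placeholders; the actual notifier.yml in the repository may differ):

    # grafana-provisioning/notifiers/notifier.yml (sketch)
    notifiers:
      - name: DIAS-KUKSA Email
        type: email
        uid: email-notifier-1
        org_id: 1
        is_default: true
        settings:
          # multiple receivers are separated by semicolons
          addresses: first.receiver@example.com;second.receiver@example.com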
_images/alert_rules_.PNG
_images/sent_email_.jpg
  • Now that you have set a notifier, you have to define an alert rule so that Grafana sends you a message under a certain condition. The first screenshot above shows a condition in which the alert is triggered when query A, total_sampling_time, is above 300. The second screenshot above shows the kind of message a receiver’s phone would get via Gmail when the condition is met.
_images/grafana_ini.PNG
  • grafana.ini is located in dias_kuksa/utils/cloud/connector-influxdb-grafana-deployment/grafana_config/ and needs to be configured to enable SMTP (Simple Mail Transfer Protocol). Simply speaking, this sets the sender’s email account. In the case of Gmail, the SMTP host server address is smtp.gmail.com:465 (click here to learn more about SMTP servers). Then set the sender’s email address as user and its password as password. To use a Gmail account, one needs to have 2FA enabled for the account and then create an app password to use as password (click here to learn more about app passwords). from_address and from_name change the sender information shown to the receiver. A sketch of the relevant section is shown after this list.

    • At the time of writing this documentation, only the graph panel visualization supports alerts as stated here.
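  • A sketch of the [smtp] section in grafana.ini, assuming a Gmail account as the sender (all credentials are placeholders):

    ; grafana.ini (sketch): enable SMTP so Grafana can send alert e-mails
    [smtp]
    enabled = true
    host = smtp.gmail.com:465
    user = sender.account@gmail.com
    ; with 2FA enabled on the Google account, use an app password here
    password = your-app-password
    from_address = sender.account@gmail.com
    from_name = DIAS-KUKSA Grafana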

It can be noticed that all configuration files for Grafana are located under /grafana_config/grafana-provisioning/ and /grafana_config/. These directories are later used by Docker Compose to provision Grafana with data sources, dashboards and notifiers. Next, the Docker Compose configuration file is explained.

Configuration Setup
_images/docker-compose_yml_.PNG
  1. docker-compose.yml runs three services (InfluxDB, Hono-InfluxDB-Connector, Grafana). Since all three services should be able to reach each other, they need to be on the same network. Therefore a user-defined bridge network, monitor_network, needs to be configured under every service:

    networks:
      - monitor_network
    
    networks:
      monitor_network:
    
  2. Hono-InfluxDB-Connector (connector) and Grafana (grafana) have a dependency on InfluxDB (influxdb). Therefore a dependency needs to be configured under connector and grafana:

    depends_on:
      - influxdb
    
  3. Since the connector service is just a data intermediary, it doesn’t need to be persistent. On the other hand, influxdb and grafana should be persistent if a user wants to save the accumulated data or metadata even when the services are taken down. Therefore a user-defined volume needs to be configured under each of influxdb and grafana:

    volumes:
      - influxdb-storage:/var/lib/influxdb
    
    volumes:
      - grafana-storage:/var/lib/grafana
      - ./grafana_config/grafana.ini:/etc/grafana/grafana.ini
      - ./grafana-provisioning/:/etc/grafana/provisioning/
    
    volumes:
      influxdb-storage:
      grafana-storage:
    

Here, /grafana_config/grafana.ini:/etc/grafana/grafana.ini and /grafana-provisioning/:/etc/grafana/provisioning/ are additionally added for grafana. These provision grafana with the data source, dashboard and notifier configured in Configuring a Grafana’s Data Source, Dashboard and Notifier. docker-compose.yml therefore finds grafana_config/grafana.ini and grafana-provisioning/ in the current directory and maps them to /etc/grafana/grafana.ini and /etc/grafana/provisioning/ respectively, which are in the grafana Docker service’s file system. Likewise, each of the internally defined volumes (influxdb-storage and grafana-storage) is mapped to the corresponding directory in the target service’s file system.

  4. The username and password used to connect to the influxdb and grafana servers, as well as the credentials of the target Bosch-IoT-Hub instance, can be provided to the connector service via the env file, since they differ from user to user. env is located in the same directory as docker-compose.yml and is hidden by default.
_images/env_file.PNG
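A sketch of what env might contain (all values are placeholders to be replaced with your own credentials):

# env (sketch)
INFLUXDB_USERNAME=admin
INFLUXDB_PASSWORD=admin
GRAFANA_USERNAME=admin
GRAFANA_PASSWORD=admin
HONO_TENANTID=<your-bosch-iot-hub-tenant-id>
HONO_MESSAGINGPW=<your-bosch-iot-hub-messaging-password>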

The information needs to be stated in docker-compose.yml as well:

environment:
  - INFLUXDB_DB=dias_kuksa_tut
  - INFLUXDB_ADMIN_USER=${INFLUXDB_USERNAME}
  - INFLUXDB_ADMIN_PASSWORD=${INFLUXDB_PASSWORD}

command: --hono.client.tlsEnabled=true --hono.client.username=messaging@${HONO_TENANTID} --hono.client.password=${HONO_MESSAGINGPW} --tenant.id=${HONO_TENANTID} --export.ip=influxdb:8086

environment:
  - GF_INSTALL_PLUGINS=natel-plotly-panel,vonage-status-panel # to add plugins
  - GF_SECURITY_ADMIN_USER=${GRAFANA_USERNAME}
  - GF_SECURITY_ADMIN_PASSWORD=${GRAFANA_PASSWORD}
  • INFLUXDB_DB=dias_kuksa_tut: The database is set to dias_kuksa_tut because it is the name of the database that Hono-InfluxDB-Connector is targeting.
_images/target_database.PNG
  • export.ip follows {$SERVICE_NAME_IN_DOCKER-COMPOSE-FILE}:{$PORT_NUMBER_IN_DOCKER-COMPOSE-FILE}. Therefore it is influxdb:8086.
  • GF_INSTALL_PLUGINS=natel-plotly-panel,vonage-status-panel: The NOx Map dashboard that we are trying to provision uses the vonage-status-panel plugin, which is not provided by default. natel-plotly-panel is just additional, to show how multiple panel plugins can be added.
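Putting the fragments above together, the complete docker-compose.yml looks roughly like the following sketch (image names and tags, the port mappings and the exact connector arguments are assumptions for illustration; refer to the actual file in dias_kuksa/utils/cloud/connector-influxdb-grafana-deployment/):

# docker-compose.yml (sketch)
version: "3"

services:
  influxdb:
    image: influxdb:1.8                   # image tag is an assumption
    ports:
      - "8086:8086"
    environment:
      - INFLUXDB_DB=dias_kuksa_tut
      - INFLUXDB_ADMIN_USER=${INFLUXDB_USERNAME}
      - INFLUXDB_ADMIN_PASSWORD=${INFLUXDB_PASSWORD}
    volumes:
      - influxdb-storage:/var/lib/influxdb
    networks:
      - monitor_network

  connector:
    image: your-dockerhub-username/hono-influxdb-connector   # the image you pushed earlier
    ports:
      - "8080:8080"
    command: >
      --hono.client.tlsEnabled=true
      --hono.client.username=messaging@${HONO_TENANTID}
      --hono.client.password=${HONO_MESSAGINGPW}
      --tenant.id=${HONO_TENANTID}
      --export.ip=influxdb:8086
    depends_on:
      - influxdb
    networks:
      - monitor_network

  grafana:
    image: grafana/grafana                # image tag is an assumption
    ports:
      - "3000:3000"
    environment:
      - GF_INSTALL_PLUGINS=natel-plotly-panel,vonage-status-panel
      - GF_SECURITY_ADMIN_USER=${GRAFANA_USERNAME}
      - GF_SECURITY_ADMIN_PASSWORD=${GRAFANA_PASSWORD}
    volumes:
      - grafana-storage:/var/lib/grafana
      - ./grafana_config/grafana.ini:/etc/grafana/grafana.ini
      - ./grafana-provisioning/:/etc/grafana/provisioning/
    depends_on:
      - influxdb
    networks:
      - monitor_network

volumes:
  influxdb-storage:
  grafana-storage:

networks:
  monitor_network: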
Deployment with Docker Compose
  1. Make sure a Bosch-IoT-Hub instance is up and running. If you haven’t brought it up, please do it now by following kuksa.cloud - Eclipse Hono (Cloud Entry).
  2. Make sure you have Docker and Docker Compose installed in your machine. If you haven’t installed, please do it now by following Installing Docker and Docker Compose.
  3. In the dias_kuksa repository, you can find the docker-compose.yml file in dias_kuksa/utils/cloud/connector-influxdb-grafana-deployment/. With one command you can deploy all the applications according to the pre-configured settings in the file. But there are a few things that need to be done by each user first.

3-1. In env, change HONO_TENANTID and HONO_MESSAGINGPW according to your Bosch-IoT-Hub instance’s credentials.

3-2. According to docker-compose.yml, influxdb, connector and grafana are deployed on ports 8086, 8080 and 3000 respectively. Therefore the corresponding ports should be available before running Docker Compose. To check the availability of a certain port, one can use net-tools. With this, one can also kill any service that is running on a certain port to make it available for the target application. Install net-tools and list the PIDs on port 8086 (InfluxDB - 8086, Connector - 8080, Grafana - 3000):

$ sudo apt install net-tools
$ sudo netstat -anp | grep 8086

A list of PIDs, if any, will be shown on the terminal.

3-3. Assuming the PID of the process running on port 8086 is 13886, you can kill it with the following command:

$ sudo kill 13886

3-4. Stop InfluxDB and Grafana if they are already running locally without using Docker:

$ sudo service influxdb stop
$ sudo service grafana-server stop
  • Because InfluxDB and Grafana are set to run on ports 8086 and 3000 respectively, it makes sense to stop them to free the corresponding ports before running Docker Compose.
  4. Now that you have made sure all three ports (8080, 8086 and 3000) are available, navigate to dias_kuksa/utils/cloud/connector-influxdb-grafana-deployment/ where the docker-compose.yml file is located and command the following:

    $ docker-compose up -d
    

If there is no error output, you have successfully deployed all applications configured in the docker-compose.yml file.

  5. Double-check whether three containers are created and working properly:

    $ docker ps
    

Make sure Hono-InfluxDB-Connector, InfluxDB and Grafana are in the “Up” status.
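If any of them is not in the “Up” status, you can inspect the logs of the corresponding service (the service names connector, influxdb and grafana follow docker-compose.yml), for example:

# follow the logs of the connector service
$ docker-compose logs -f connector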

  6. Now you should be able to access the Grafana server through a web browser.

6-1. Open a browser and go to http://0.0.0.0:3000/.

6-2. Log in with the admin account:

Email or username: admin
Password: admin

6-3. You can access and monitor the provisioned NOx map dashboard (Dashboards > NOx Map Dashboard). Change the time range according to your preference.

If the provisioned dashboard is not displayed on the main page, hover over “Dashboards” in the left side bar and then go to “Manage”. You should be able to see “NOx Map Dashboard” under the “General” folder.

<Additional Docker Compose commands>

  • To stop your services once you have finished with them:
    $ docker-compose down
  • To also remove the data volumes used by the containers:
    $ docker-compose down --volumes

Deployment Option 3 - Azure Kubernetes Service (AKS)

** WORK IN PROGRESS… **

(Additional) dias_kuksa - InfluxDB-Consumer

Since there may be more applications besides Grafana that use InfluxDB, it makes sense to create a consumer application that fetches data from InfluxDB and makes it available for any purpose.

  • There is an InfluxDB consumer Python script, influxDB_consumer.py, in dias_kuksa/utils/cloud/.
  • The script fetches the latest data under certain keys from the local InfluxDB server and stores them in the Python dictionary corresponding to each key by using the function storeNewMetricVal. You can then use the data in the dictionaries according to your purpose and goals. A simplified sketch is shown below.
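  • The sketch below illustrates the idea, assuming a local InfluxDB 1.x server and the influxdb Python client; the measurement names and the field name “value” are illustrative and not necessarily those used by the actual script:

    # A simplified sketch of an InfluxDB consumer (not the actual influxDB_consumer.py).
    # Assumes the "influxdb" Python package and a local InfluxDB 1.x server.
    from influxdb import InfluxDBClient

    client = InfluxDBClient(host="localhost", port=8086, database="dias_kuksa_tut")

    metrics = {}  # latest value cached per measurement key

    def store_new_metric_val(measurement):
        # Fetch the most recent field value of the given measurement (mirrors storeNewMetricVal).
        result = client.query('SELECT LAST("value") FROM "{}"'.format(measurement))
        points = list(result.get_points())
        if points:
            metrics[measurement] = points[0]["last"]

    # Hypothetical measurement names; replace them with the keys you actually store.
    for key in ("total_sampling_time", "cumulative_nox_ds_g"):
        store_new_metric_val(key)

    print(metrics)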

Step 4: Simulation

When everything from step 1 to 3 is set up, you can finally test whether or not the components communicate with each other and work correctly.

  • If your CAN interface is a physical one (e.g., can0), you can either use a simulation tool such as Vector CANalyzer or CANoe, or connect to the actual CAN bus in a vehicle.

    • When connected to the actual CAN bus in a vehicle, you can use ssh from your laptop to access your Raspberry-Pi. If your laptop’s OS is Windows, you can simply use Putty. A video tutorial on how to remotely access your Raspberry-Pi with Putty is available here.
  • If your CAN interface is a virtual one (e.g., vcan0), you can use canplayer from the can-utils library. Before running canplayer, you need to prepare a proper .log file to be used as an argument for canplayer. The .log file used here was originally logged with CANalyzer in the .asc format and converted to the .log format with a Python script.

  • To log CAN traces with CANalyzer and get the .asc file (that should be converted to the .log format later) or get the .log file directly with Raspberry-Pi tools, you can follow Reference: Logging CAN traces.

  • The following describes how to convert a .asc file to a .log file and simulate CAN traffic with the .log file on your Raspberry-Pi, so that you can verify whether your setup functions correctly from in-vehicle to cloud.

asc2log Conversion

Since canplayer from the can-utils library only takes the .log format, the existing .asc file should be converted to the .log format.

  1. Make sure that all KUKSA components from In-vehicle to Cloud have been set up from the previous steps.

  2. canplayer can be run on the same in-vehicle machine (e.g., your Raspberry-Pi). Therefore you should be on your Raspberry-Pi to proceed further.

  3. Navigate to dias_kuksa/utils/canplayer/ where asc2log_channel_separator.py and two .asc files (omitted due to copyright and thus shared on request) are located.

  4. otosan_can0-30092020.asc was logged from CAN channel 0 and otosan_can2-30092020.asc from channel 2 of the Ford Otosan truck.

  5. Since canplayer cannot play a .asc file, you have to convert them to the .log format. You can do this conversion with asc2log_channel_separator.py.

  6. As the description of asc2log_channel_separator.py states, the script not only performs the asc2log conversion but also separates the result by CAN channel in case the target .asc file has traces from more than one CAN channel. If the target .asc file has traces from only one CAN channel, the script produces only one resulting .log file.

  7. Prior to running asc2log_channel_separator.py, the can-utils library must be installed. If you have followed the steps from the beginning, you have already installed this library in can-utils.

  8. To convert otosan_can0-30092020.asc, navigate to dias_kuksa/utils/canplayer/ and command the following:

    $ python3 asc2log_channel_separator.py --asc otosan_can0-30092020.asc --can vcan0
    
  9. As a result of step 8, can0_otosan_can0-30092020.log will be created.

Simulation with canplayer

  1. Now that we have the .log file to play, make sure your in-vehicle components are already up and running.
  • Configuring vcan0 and running kuksa-val-server.exe and dbcfeeder.py are mandatory; cloudfeeder.py and other cloud-side components are optional here.
  2. To run canplayer with the target .log file, can0_otosan_can0-30092020.log, navigate to dias_kuksa/utils/canplayer/, where the .log file is located, and command the following:

    $ canplayer -I can0_otosan_can0-30092020.log
    
  • You should be able to see signals being updated on both terminals, kuksa-val-server.exe and dbcfeeder.py, as shown in the screenshots below.
_images/canplayer_terminal.png
  • Although the screenshots are taken in an Ubuntu virtual machine for convenience, the environment for this simulation is meant to be Raspberry-Pi.

Reference: Logging CAN traces

It would be tedious to get inside the target vehicle, set up the simulation environment and test every time there is a new update to your implementation. This is why having a CAN trace log file is important: it eases the development process. With a log file, you can develop and test your application at your desk without having to be in the vehicle, which saves the time and energy that would otherwise be spent on setting up the test environment.

Although canplayer from the can-utils library on Raspberry-Pi only accepts the .log format, it is recommended to use Vector tools to capture CAN traces, since they can provide the traces in a variety of formats, so the traces can be used not only with canplayer but also with several other tools in different environments. Therefore, the two ways to log CAN traces in the target vehicle, with and without Vector tools, are introduced here.

Option 1: with Vector Tools

Hardware Prerequisites
  • Laptop installed with Vector Software
  • Licensed Vector CANcase (VN1630 is used here)
  • USB Interface for CAN and I/O (comes with CANcase)
  • CAN Cable (D-sub /D-sub) x 1
  • CAN Adapter (Open cable to D-sub) x 1
Software Prerequisites
  • Vector Software (CANalyzer Version 13.0 SP2 is used here)
Logging with Vector Tools
  1. Connect the Vector CANcase to the CAN-high and CAN-low lines of an ECU in the vehicle using the CAN cable and adapter. For this, you also need to refer to the ECU hardware specification to find out which ECU ports correspond to CAN-high and CAN-low and to which CAN channel they belong.
  2. Connect the Vector CANcase to your laptop and check whether the device manager recognizes the CANcase.
_images/0-device_manager.PNG
  • Because the CANcase used here is Vector VN1630, it shows the exact name of the CANcase.
  3. Run CANalyzer 13.0 SP2.
_images/1-license.PNG
  • The capture shows the case where your CANcase is properly licensed with CANalyzer PRO 13.0. Press “OK” to proceed.
_images/2-license.PNG
  • The capture shows the case where your CANcase is not licensed. You cannot proceed further in this case.
  4. The first thing you will see in CANalyzer is the “Trace” tab. Here you can see the incoming CAN traces as they are being read.
_images/3-trace.PNG
  5. To synchronize your CANcase with the target vehicle’s baudrate, you have to configure it manually in CANalyzer. To do this, switch to the “Configuration” tab.
_images/4-configuration.PNG
  6. When you double-click the CANcase icon, a window named “Network Hardware Configuration” shows up. Select the CAN channel (VN1630: written on the back side of the CANcase) that you connected to the CAN ports of the target vehicle and set the baudrate to the same value as that of the vehicle. Then click “OK”.
_images/5-configuration_baudrate.PNG
  7. To enable the logging function, find the “Logging” box on the right-hand side of the Configuration tab and double-click the small node on its left.
_images/6-logging.PNG
  • Confirm that the “Logging” box is enabled as in the capture below.
_images/7-logging.PNG
  8. To change the destination folder or the result file format, double-click the folder-shaped icon on the right and set them as you prefer.
_images/8-logformat.PNG
  • If you want to use the result for canplayer in Raspberry-Pi, set the result file format as “ASCII Frame Logging (*.asc)”. That way, you can convert your result to the .log format by running asc2log_channel_separator.py that can be found in dias_kuksa/utils/canplayer/.
  9. Make sure everything is properly connected and configured. You can now start logging CAN traces by pressing the “Start” button in the top-left corner.
_images/9-start.PNG
  • If everything is working correctly, you should be able to see the incoming CAN traces in the “Trace” tab.

Option 2: with Raspberry-Pi and CAN Shield

Hardware Prerequisites
  • Laptop to ssh Raspberry-Pi
  • Raspberry Pi 3 or 4
  • CAN Shield (SKPang PiCan2 or Seeed 2 Channel CAN)
  • CAN Cable (D-sub /D-sub) x 1
  • CAN Adapter (Open cable to D-sub) x 1
Software Prerequisites
  • Network that can be shared by the laptop and Raspberry-Pi (for SSH purpose, you can also use your mobile hotspot.)
  • The can-utils library (can-utils)
Logging with Raspberry-Pi and CAN Shield
  1. Assuming the CAN shield is already attached to the Raspberry-Pi, connect the shield to the CAN-high and CAN-low lines of an ECU in the vehicle using the CAN cable and adapter. For this, you also need to refer to the ECU hardware specification to find out which ECU ports correspond to CAN-high and CAN-low and to which CAN channel they belong.

  2. SSH into the Raspberry-Pi using Putty (tutorial).

  3. Once you have successfully connected to the Raspberry-Pi via SSH, you will be on its terminal. Install the can-utils library if you haven’t yet:

    $ sudo apt install can-utils
    
  4. Configure the CAN shield.

  5. Make sure everything is properly connected and configured. Assuming the name of the configured CAN interface is can0, command the following:

    $ candump -l can0
    
  • If everything is working correctly, you should be able to see a .log file named with the current time (e.g., candump-2020-10-06_163848.log) in the directory where the terminal is open.
  6. If you want to stop logging, press ctrl + c and check the resulting .log file to see whether the CAN traces have been logged properly.

Future Work

Many implementations and tests have been left for the future due to the limited time, but the topic has great potential to be developed further. Future work concerns the following:

Contact

Name: Junhyung Ki

Bosch Email: fixed-term.Junhyung.Ki@de.bosch.com

Personal Email: kijoonh91@gmail.com

Student Email: junhyung.ki001@stud.fh-dortmund.de

LinkedIn