Getting Started with DIAS-KUKSA¶
Contents¶
Introduction¶
DIAS (DIagnostic Anti-tampering Systems)¶
Modern vehicles with internal combustion engines are equipped with exhaust aftertreatment systems that drastically reduce the emission of harmful exhaust gases. However, there are companies that offer facilities and services to disable these exhaust aftertreatment systems. In a joint European research and development project, DIAS, we will help prevent or uncover these manipulations.
Eclipse KUKSA¶
- KUKSA is a code-based result of the internationally funded project, APPSTACLE (2017 - 2019).
- An open-source eco-system for the connected vehicle domain.
- It is introduced to establish a standard for car-to-cloud scenarios.
- It improves comprehensive domain-related development activities.
- It opens the market to external applications and service providers.
- It facilitates the use of open-source software wherever possible without compromising security.
- The initial release (0.1.0): 30.09.2019 / The second release (0.2.0): 01.2020
- Implementing the DIAS use-case with KUKSA benefits both parties by enabling the solution to be compliant with any vehicles regardless of OEM-specific standards.
DIAS-KUKSA¶
One objective of DIAS is to create a cloud-based diagnostic system. Managing a large fleet of vehicles requires sufficient computing power and resources. A cloud-based system would not only provide these but also make the entire system easy to scale with the number of target vehicles by utilizing cloud service providers such as Azure, AWS and Bosch IoT Hub. For the system to be powered by service providers like these, it is essential to establish connectivity between the vehicle-side applications and the external-server-based applications. The KUKSA infrastructure offers the means for establishing such connectivity.
The goal of this documentation is to explain, step by step, how to set up each infrastructure component for the DIAS use-case, so that readers gain a thorough understanding of how to build their own implementation on top of the connectivity established with KUKSA.
DIAS-KUKSA Overall Schema¶
The figure illustrates the entire connectivity cycle from the vehicle to the end-consumer. In the following chapters, how to establish such connectivity is described in detail.
Step 1: Hardware Setup¶
Raspberry-Pi (Data Publisher)¶
For development, you will be using a Raspberry-Pi 3 or 4 (preferably 4, since it is faster and has more RAM). The Raspberry-Pi is not a regular micro-controller but rather a single-board computer. This means that you can run an OS (Operating System; Raspbian, Ubuntu, etc.) on it and connect it to other IO devices such as a monitor, mouse and keyboard. This way, you can use your Raspberry-Pi in a similar way to your PC, which eases the entire in-vehicle development process.
- (Hardware Option 1 - Raspberry-Pi) In this documentation, the following hardware and OS are used.
- HW: Raspberry-Pi 4
- OS: Raspberry-Pi OS (32-bit) with desktop / Download
Set up the Raspberry-Pi. You can get started with your Raspberry-Pi by following this instruction.
When installation is done, open a terminal and install Git on your Raspberry-Pi:
$ sudo apt update
$ sudo apt install git
$ git --version
- (Hardware Option 2 - Ubuntu VM) If you are only interested in desk development without connecting to the real CAN, you can use a virtual machine as your in-vehicle “hardware”.
CAN Interface for Hardware¶
For your hardware to interact with CAN, a CAN interface is required. Since the Raspberry-Pi doesn't have a built-in CAN interface, the user has to configure one manually. There are several ways to configure the interface on the Raspberry-Pi; three options with different purposes are introduced here.
CAN Interface Option 1 - Virtual CAN (Logfile Simulation Purpose)¶
- A virtual CAN interface emulates a physical CAN interface and behaves nearly identically with fewer limitations. A virtual CAN interface is appropriate when the user just wants to play a CAN log for testing applications in the development phase.
Open a terminal and command:
$ sudo modprobe vcan
$ sudo ip link add dev vcan0 type vcan
$ sudo ip link set up vcan0
Install net-tools to use ifconfig:
$ sudo apt install net-tools
Now you should be able to see the interface, vcan0, when commanding:
$ ifconfig
CAN Interface Option 2 - SKPang PiCan2 (Only for Raspberry-Pi)¶
- SKPang PiCan2 is a shield that provides a physical CAN interface between Raspberry-Pi and the actual CAN bus. A physical CAN interface is required for a field test.
For your Raspberry-Pi to recognize the connected PiCan2, you need to go through a setup process. After physically connecting a PiCan2 shield to your Raspberry-Pi, follow the “Software Installation (p.6)” part of the instruction from Raspberry-Pi.
When installation is done, open a terminal and confirm whether the can0 interface is present by commanding:
$ ifconfig -a
If can0 is shown, configure and bring the interface up by commanding:
$ sudo ip link set can0 up type can bitrate 500000
- bitrate shall be set to the same value as the CAN baudrate of the target vehicle.
Now you should be able to see the interface, can0, when commanding:
$ ifconfig
If you want to bring the interface down, command the following:
$ sudo ip link set can0 down
CAN Interface Option 3 - Seeed 2-Channel Shield (Only for Raspberry-Pi)¶
- Seeed 2-Channel CAN-BUS(FD) Shield serves the same purpose as SKPang PiCan2 does but with two different CAN interfaces. Because a lot of vehicles use more than one CAN channel, it is required to use a dual-channel shield when data from two different CAN channels need to be analyzed in real-time.
- A detailed setup description can be found here.
Get the CAN-HAT source code and install all the Linux kernel drivers:
$ git clone https://github.com/seeed-Studio/pi-hats
$ cd pi-hats/CAN-HAT
$ sudo ./install.sh
$ sudo reboot
After the reboot, confirm whether the can0 and can1 interfaces have been successfully initialized by commanding:
$ dmesg | grep spi
You should be able to see output like the following:
[ 3.725586] mcp25xxfd spi0.0 can0: MCP2517 successfully initialized.
[ 3.757376] mcp25xxfd spi1.0 can1: MCP2517 successfully initialized.
Open a terminal and double-check whether the can0 and can1 interfaces are present by commanding:
$ ifconfig -a
5-A. (CAN Classic) If can0 and can1 are shown, configure and bring the interfaces up by commanding:
$ sudo ip link set can0 up type can bitrate 1000000 restart-ms 1000 fd off
$ sudo ip link set can1 up type can bitrate 1000000 restart-ms 1000 fd off
- bitrate shall be set to the same value as the CAN baudrate of the target vehicle.
5-B. (CAN FD) If can0 and can1 are shown, configure and bring the interfaces up by commanding:
$ sudo ip link set can0 up type can bitrate 1000000 dbitrate 2000000 restart-ms 1000 fd on
$ sudo ip link set can1 up type can bitrate 1000000 dbitrate 2000000 restart-ms 1000 fd on
- bitrate shall be set to the same value as the CAN baudrate of the target vehicle.
If you want to bring the interfaces down, command the following:
$ sudo ip link set can0 down
$ sudo ip link set can1 down
Linux Machine (Data Consumer)¶
- A data consumer machine is intended to use the data produced by the connected vehicle's Raspberry-Pi. For development, you can use a virtual machine on your PC, which is later expected to be replaceable with a VM instance from a cloud service provider to ensure scalability. Please note that a virtual machine is not required if your host OS is already Ubuntu.
Set up an Ubuntu virtual machine. A detailed tutorial on how to set up Ubuntu with VirtualBox can be found here.
- The image file used (Ubuntu 18.04 LTS - Bionic Beaver) for this documentation can be downloaded here.
Open a terminal and install Git on Ubuntu:
$ sudo apt update
$ sudo apt install git
$ git --version
Step 2: In-vehicle Setup¶
- The in-vehicle environment here is Raspberry-Pi 4.
- To reduce complexity, a virtual CAN interface, vcan0, is used here. Therefore you should follow CAN Interface Option 1 - Virtual CAN (Logfile Simulation Purpose) prior to this part.
can-utils¶
can-utils is a Linux-specific set of utilities that enables Linux to communicate with the vehicle's CAN network. A basic tutorial can be found here.
Open a terminal and install can-utils:
$ sudo apt install can-utils
To test can-utils, command the following in the same terminal:
$ candump vcan0
- candump prints all data that is being received by a CAN interface, vcan0, to the terminal.
Open another terminal and command the following:
$ cansend vcan0 7DF#DEADBEEF
- cansend sends a CAN message, 7DF#DEADBEEF, to the corresponding CAN interface, vcan0.
Confirm whether candump has received the CAN message. You should be able to see output like the following on the previous terminal:
vcan0 7DF [4] DE AD BE EF
kuksa.val Infrastructure¶
Install Git:
$ sudo apt install git
Recursively clone the kuksa.val repository:
$ git clone --recursive https://github.com/eclipse/kuksa.val.git
Make a folder named build inside the kuksa.val repository folder and navigate to kuksa.val/build/:
$ cd kuksa.val
$ mkdir build
$ cd build
The following commands should be run before cmake to avoid possible errors.
4-1. Install cmake (version 3.12 or higher) if it hasn't been installed yet:
$ sudo apt-get update && sudo apt-get upgrade
Raspberry-Pi:
$ sudo apt install cmake
Ubuntu:
$ sudo snap install cmake --classic
4-2. Install dependencies (Boost libraries, OpenSSL, Mosquitto and more):
$ sudo apt-get install libblkid-dev e2fslibs-dev libboost-all-dev libaudit-dev libssl-dev mosquitto libmosquitto-dev libglib2.0-dev
You can run cmake now. Navigate to kuksa.val/build/ and command the following:
$ cmake ..
Then command make in the same directory:
$ make
If this succeeds, you have successfully built the kuksa.val infrastructure.
kuksa.val - kuksa.val VSS Server Setup¶
- The kuksa.val server is built on the GENIVI VSS (Vehicle Signal Specification) data structure model. The VSS data structure is created from the JSON file that is passed to the kuksa-val-server executable as an argument under --vss (e.g., vss_rel_2.0.json). Before we bring up and run the kuksa.val server, we can create our own VSS data structure with the following steps.
1-1. Recursively clone the GENIVI/vehicle_signal_specification repository:
$ git clone --recurse-submodules https://github.com/GENIVI/vehicle_signal_specification.git
1-2. The name of the cloned repository folder is vehicle_signal_specification. Inside it there is a Makefile that creates the VSS data structure according to vehicle_signal_specification/spec. Since we only need a JSON file as output, we can modify the Makefile as follows:
#
# Makefile to generate specifications
#
.PHONY: clean all json
all: clean json
DESTDIR?=/usr/local
TOOLSDIR?=./vss-tools
DEPLOYDIR?=./docs-gen/static/releases/nightly
json:
	${TOOLSDIR}/vspec2json.py -i:spec/VehicleSignalSpecification.id -I ./spec ./spec/VehicleSignalSpecification.vspec vss_rel_$$(cat VERSION).json
clean:
	rm -f vss_rel_$$(cat VERSION).json
	(cd ${TOOLSDIR}/vspec2c/; make clean)
install:
	git submodule init
	git submodule update
	(cd ${TOOLSDIR}/; python3 setup.py install --install-scripts=${DESTDIR}/bin)
	$(MAKE) DESTDIR=${DESTDIR} -C ${TOOLSDIR}/vspec2c install
	install -d ${DESTDIR}/share/vss
	(cd spec; cp -r * ${DESTDIR}/share/vss)
deploy:
	if [ -d $(DEPLOYDIR) ]; then \
	    rm -f ${DEPLOYDIR}/vss_rel_*;\
	else \
	    mkdir -p ${DEPLOYDIR}; \
	fi;
	cp vss_rel_* ${DEPLOYDIR}/
- Please note that it is recommended to modify the file manually, since a Makefile is tab-sensitive.
1-3. Now we can replace the vehicle_signal_specification/spec folder with the modified folder. To get the modified spec folder, clone the junh-ki/dias_kuksa repository:
$ git clone https://github.com/junh-ki/dias_kuksa.git
1-4. The spec folder can be found in the directory dias_kuksa/utils/in-vehicle/vss_structure_example/. Replace the existing spec folder in vehicle_signal_specification/ with the one from dias_kuksa/utils/in-vehicle/vss_structure_example/. The file structure of the spec folder is largely self-explanatory. The following figure illustrates what the GENIVI data structure looks like when created with this spec folder.
- By modifying the structure of the spec folder, a user-specific GENIVI data structure can be created and fed into kuksa-val-server.
1-5. Before commanding make, install the Python dependencies (anytree, deprecation, stringcase, pyyaml) first:
$ sudo apt install python3-pip
$ pip3 install anytree deprecation stringcase pyyaml
1-6. Navigate to the directory vehicle_signal_specification/ and command make to create a new JSON file:
$ make
1-7. As a result, you get a JSON file named vss_rel_2.0.0-alpha+006.json. Rename this file modified.json for convenience and move it to kuksa.val/build/src/, where the kuksa-val-server executable file is located.
Now we can bring up and run the kuksa.val server with modified.json. Navigate to the directory kuksa.val/build/src/ and command the following:
$ ./kuksa-val-server --vss modified.json --insecure --log-level ALL
- The kuksa.val server is entirely passive, which means that you need supplementary applications to feed and fetch the data. dbcfeeder.py and cloudfeeder.py are introduced in the following sections; they are meant to handle setting and getting data on the kuksa.val server.
kuksa.val - dbcfeeder.py Setup¶
kuksa.val/examples/dbc2val/dbcfeeder.py interprets the CAN data received by the CAN interface (e.g., can0 or vcan0) and writes it to the kuksa.val server.
dbcfeeder.py takes four compulsory arguments to be run:
- CAN interface (e.g., can0 or vcan0) / -d or --device / To connect to the CAN device interface.
- JSON token (e.g., super-admin.json.token) / -j or --jwt / To have write access to the server.
- DBC file (e.g., dbcfile.dbc) / --dbc / To translate the raw CAN data.
- Mapping YML file (e.g., mapping.yml) / --mapping / To map each of the specific signals to the corresponding path in the kuksa.val server.
- Since the kuksa.val work package already provides the admin JSON token, you only need a DBC file and a YML file. The junh-ki/dias_kuksa repository provides an example DBC file and YML file. (The DBC file is target-vehicle-specific and can be provided by the target vehicle's manufacturer.)
If you haven't already cloned the junh-ki/dias_kuksa repository, please clone it now:
$ git clone https://github.com/junh-ki/dias_kuksa.git
Navigate to the directory dias_kuksa/utils/in-vehicle/dbcfeeder_example_arguments/ and copy dias_mapping.yml and dias_simple.dbc (omitted due to copyright issues and thus shared on request) to kuksa.val/clients/feeder/dbc2val/, where dbcfeeder.py is located.
Before running dbcfeeder.py, install the Python dependencies (python-can, cantools, serial, websockets) first:
$ pip3 install python-can cantools serial websockets
If you haven't brought up a virtual CAN interface, vcan0, please do it now by following CAN Interface Option 1 - Virtual CAN (Logfile Simulation Purpose).
Navigate to kuksa.val/clients/feeder/dbc2val/ and command the following:
$ python3 dbcfeeder.py -d vcan0 -j ../../../certificates/jwt/super-admin.json.token --dbc dias_simple.dbc --mapping dias_mapping.yml
(Optional) If your DBC file follows the SAE J1939 standard, please follow Running dbcfeeder.py with j1939reader.py to run dbcfeeder.py with J1939.
kuksa.val - cloudfeeder.py Setup¶
dias_kuksa/utils/in-vehicle/cloudfeeder_telemetry/cloudfeeder.py fetches the data from the kuksa.val in-vehicle server, preprocesses it with a user-specific preprocessor, dias_kuksa/utils/in-vehicle/cloudfeeder_telemetry/preprocessor_bosch.py, and transmits the result to Hono (kuksa.cloud - Eclipse Hono (Cloud Entry)) in the form of a JSON dictionary. (A conceptual sketch of this fetch-and-transmit loop is shown at the end of this section.) cloudfeeder.py takes the following compulsory arguments to be run:
- JSON token (e.g., super-admin.json.token) / -j or --jwt / To have write access to the server.
- Host URL (e.g., "mqtt.bosch-iot-hub.com") / --host
- Protocol Port Number (e.g., "8883") / -p or --port
- Credential Authorization Username (configured when creating the credentials) (e.g., "{username}@{tenant-id}") / -u or --username
- Credential Authorization Password (configured when creating the credentials) (e.g., "your_pw") / -P or --password
- Server Certificate File (MQTT TLS Encryption) (e.g., "iothub.crt") / -c or --cafile
- Data Type (e.g., "telemetry" or "event") / -t or --type
- (Optional) preprocessor_bosch.py is designed to follow Bosch's diagnostic methodologies. You can therefore create your own preprocessor_xxx.py or modify preprocessor_example.py to replace preprocessor_bosch.py for your own purpose. Of course, the corresponding lines in cloudfeeder.py should be modified accordingly in this case.
- Navigate to dias_kuksa/utils/in-vehicle/cloudfeeder_telemetry/ and copy cloudfeeder.py and preprocessor_example.py to kuksa.val/clients/vss-testclient/, where the testclient.py file is located.
- Then the do_getValue(self, args) function in kuksa.val/clients/vss-testclient/testclient.py should be modified as below.
...
    def do_getValue(self, args):
        """Get the value of a parameter"""
        req = {}
        req["requestId"] = 1234
        req["action"] = "get"
        req["path"] = args.Parameter
        jsonDump = json.dumps(req)
        self.sendMsgQueue.put(jsonDump)
        resp = self.recvMsgQueue.get()
        # print(highlight(resp, lexers.JsonLexer(), formatters.TerminalFormatter()))
        self.pathCompletionItems = []
        datastore = json.loads(resp)
        return datastore
...
Due to its dependency on the cloud instance information, you should first create either an Eclipse Hono or a Bosch-IoT-Hub instance by following kuksa.cloud - Eclipse Hono (Cloud Entry), so that the information required for running cloudfeeder.py is ready.
Download the server certificate here and place it in kuksa.val/clients/vss-testclient/, where the cloudfeeder.py file is located.
Before running cloudfeeder.py, install the dependencies (mosquitto and mosquitto-clients from apt, and pygments and cmd2 from pip3) first:
$ sudo apt-get update
$ sudo apt-get install mosquitto mosquitto-clients
$ pip3 install pygments cmd2
When all the required information is ready, navigate to kuksa.val/clients/vss-testclient/ and run cloudfeeder.py by commanding:
$ python3 cloudfeeder.py -j {admin_json_token} --host {host_url} -p {port_number} -u {auth-id}@{tenant-id} -P {password} -c {server_certificate_file} -t {transmission_type}
- Just a reminder: the information between {} differs depending on the target Hono instance. You can follow kuksa.cloud - Eclipse Hono (Cloud Entry) to create a Hono instance.
- admin_json_token can be found under kuksa.val/certificates/jwt/super-admin.json.token. Therefore, ../../certificates/jwt/super-admin.json.token should be entered for -j when the current directory is kuksa.val/clients/vss-testclient/.
- If you have successfully made it here, you should be able to see cloudfeeder.py fetching and transmitting the data every 1~2 seconds by now.
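The loop below is a conceptual sketch of what cloudfeeder.py does, not the actual implementation: it assumes a paho-mqtt client, and the helpers fetch_from_kuksa_val() and preprocess() are hypothetical stand-ins for the testclient-based fetching and for preprocessor_bosch.py. The actual script may use different libraries, but the overall flow (fetch, preprocess, publish to Hono's telemetry topic over TLS, repeat every 1~2 seconds) is the same:

import json, ssl, time
import paho.mqtt.client as mqtt

def fetch_from_kuksa_val():
    # hypothetical stand-in for the modified do_getValue() calls against kuksa-val-server
    return {"Vehicle.ExampleSignal": 42.0}

def preprocess(snapshot):
    # hypothetical stand-in for preprocessor_bosch.py / preprocessor_example.py
    return snapshot

client = mqtt.Client()
client.username_pw_set("{auth-id}@{tenant-id}", "{password}")
client.tls_set("iothub.crt", tls_version=ssl.PROTOCOL_TLSv1_2)
client.connect("mqtt.bosch-iot-hub.com", 8883)
client.loop_start()

while True:
    payload = preprocess(fetch_from_kuksa_val())
    client.publish("telemetry", json.dumps(payload), qos=1)  # Hono MQTT telemetry topic
    time.sleep(1)                                            # roughly every 1~2 seconds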
DIAS Extension: SAE J1939 Option¶
Introduction to SAE J1939¶
Society of Automotive Engineers standard SAE J1939 is the vehicle bus recommended practice used for communication and diagnostics among vehicle components. Originating in the car and heavy-duty truck industry in the United States, it is now widely used in other parts of the world. SAE J1939 is a higher-layer protocol (e.g., an add-on software) that uses the CAN Bus technology as a physical layer. In addition to the standard CAN Bus capabilities, SAE J1939 supports node addresses, and it can deliver data frames longer than 8 bytes (in fact, up to 1785 bytes).
Since DIAS's demonstrator vehicle is a Ford Otosan truck that follows the SAE J1939 standard, it is necessary for KUKSA to support the standard.
A normal DBC file is used to apply identifying names, scaling, offsets, and defining information to data transmitted within a CAN frame. A J1939 DBC file serves the same purpose but targets data transmitted within a Parameter Group Number (PGN) unit. This is because, in J1939, some parameter groups are delivered in more than one CAN frame depending on the PGN's data length.
To put it simply, one can take a look at a PGN example. The following PGN-65251 information is captured in the official SAE J1939-71 documentation revised in 2011-03 (PDF Download Link).
PGN-65251 defines "Engine Configuration 1 (EC1)" and consists of 39 bytes as stated in "Data Length". This means that, to receive the complete information of PGN-65251, at least 6 CAN frames are required: each transport-protocol data frame carries at most 7 data bytes (one byte is used for the sequence number), and 39 bytes therefore need 6 such frames:
- A Transfer Protocol Broadcast Announce Message (TP.BAM) is used to inform all the nodes (e.g., Raspberry-Pi) of the network that a large message is about to be broadcast and defines the parameter group (The Target PGN) and the number of total packets to be sent. After TP.BAM is sent, a set of TP.DT messages are sent at specific time intervals.
- A Transfer Protocol Data Transfer (TP.DT) is an individual packet of a multipacket message transfer. It is used for the transfer of data associated with parameter groups that have more than 8 bytes of data (e.g., PGN-65251: 39 bytes).
For example, one TP.BAM and three TP.DT messages would be sent to deliver a parameter group that has more than 20 bytes (PGN-65260) as illustrated below:
There are a lot of concepts defined in the SAE J1939 documentation that are required to conform to the J1939 transport protocol. One can look into the documentation to understand the concepts in depth. However, the general premise is simple: raw CAN frames are processed to produce PGN data, which is then decoded into CAN signals consumed by an in-vehicle application. Having said that, finding an existing J1939 library that can convert raw CAN frames to PGN data should be the first step. Since dbcfeeder.py is written in Python, it makes sense to choose a library written in the same language.
The Python j1939 package converts raw CAN frames to PGN data and makes the data available for use. The following figures compare two scenarios in which dbcfeeder.py reads CAN signals without and with J1939.
Without J1939, dbcfeeder.py receives decoded CAN signals through dbcreader.py, which reads raw CAN frames directly from a CAN interface (e.g., can0 or vcan0).
With J1939, dbcfeeder.py receives decoded CAN signals through j1939reader.py (source), which reads PGN messages from the j1939.ElectronicControlUnit (ECU) class of the Python j1939 package; the ECU converts raw CAN frames to PGN data.
The j1939.ControllerApplication (CA) class from the Python j1939 package is a superclass of j1939reader.J1939Reader and utilizes the ECU class's functionality to derive PGN data.
At the time of writing this documentation, the following features are available from the Python j1939 package according to here (a minimal usage sketch follows the list):
- One ElectronicControlUnit (ECU) can hold multiple ControllerApplications (CA)
- ECU (CA) Naming according to SAE J1939/81
- Full support of the transport protocol according to SAE J1939/21 for sending and receiving
- Message Packaging and Reassembly (up to 1785 bytes)
- Transfer Protocol Transfer Data (TP.DT)
- Transfer Protocol Communication Management (TP.CM)
- Multi-Packet Broadcasts
- Broadcast Announce Message (TP.BAM)
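The sketch below shows how the package is typically wired together, based on its published example code. The exact constructor arguments, NAME fields and callback signature may differ between package versions, so treat the names here as assumptions rather than a definitive API reference:

import j1939

def on_message(pgn, data):
    # called once a complete (possibly multi-frame) parameter group has been reassembled
    print("PGN {} length {}".format(pgn, len(data)))

# every CA needs a NAME for address claiming (the field values here are placeholders)
name = j1939.Name(arbitrary_address_capable=0,
                  industry_group=j1939.Name.IndustryGroup.Industrial,
                  vehicle_system_instance=1, vehicle_system=1, function=1,
                  function_instance=1, ecu_instance=1,
                  manufacturer_code=666, identity_number=1234567)

ecu = j1939.ElectronicControlUnit()               # one ECU can hold multiple CAs
ecu.connect(bustype='socketcan', channel='can0')  # attach to the CAN interface
ca = j1939.ControllerApplication(name, 128)       # 128 = preferred source address
ecu.add_ca(controller_application=ca)
ca.subscribe(on_message)                          # register the PGN callback
ca.start()                                        # start address claiming / reception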
Implementation to j1939reader.py¶
A sophisticated example of a j1939.ControllerApplication that receives PGN messages from a j1939.ElectronicControlUnit is already introduced here as OwnCaToProduceCyclicMessages. When running the OwnCaToProduceCyclicMessages class and a J1939 CAN log file together, the following messages are shown on the OwnCaToProduceCyclicMessages terminal.
As shown above, each line prints the number and the length of a PGN that has been read. These messages are produced by a callback function called OwnCaToProduceCyclicMessages.on_message.
As already mentioned, the general premise is that raw CAN frames are processed to produce PGN data, which is then decoded into CAN signals consumed by an in-vehicle application. Here we can divide the premise into three requirements:
- A. Getting PGN data
- B. Decoding PGN data into CAN signals
- C. Making the decoded CAN signals available to the target in-vehicle application (e.g., dbcfeeder.py)
It is already possible to receive PGN data through OwnCaToProduceCyclicMessages (code). Also, some parts of dbcreader.py (code) can be reused to make the decoded signals available to the in-vehicle application.
j1939reader.py in dbcfeeder.py¶
1. dbcfeeder.py without J1939¶
In the case without J1939, dbcfeeder.py imports dbcreader.py and passes the required arguments when creating an instance of dbcreader.DBCReader. The dbcreader.DBCReader instance then starts a thread by running start_listening() and receives CAN frames through its connected CAN interface (cfg['can.port']).
2. dbcfeeder.py with J1939¶
Likewise, in the case with J1939, dbcfeeder.py imports j1939reader.py instead of dbcreader.py and passes the required arguments when creating an instance of j1939reader.J1939Reader. The j1939reader.J1939Reader instance then starts a thread by running start_listening() and receives PGN data through a j1939.ElectronicControlUnit instance that is connected to the passed CAN interface (cfg['can.port']).
Decoding PGN Data with j1939reader.py¶
j1939reader.py (code) reuses OwnCaToProduceCyclicMessages and dbcreader.py for requirements A and C, and adds the PGN decoding functionality for requirement B, which is explained in detail in the following.
1. Function: start_listening¶
start_listening creates a j1939.ElectronicControlUnit instance and connects it to the passed CAN interface (cfg['can.port']). The ECU instance then adds the current j1939reader.J1939Reader instance (more precisely, the j1939.ControllerApplication that j1939reader.J1939Reader inherits from) and starts a thread for it. After start_listening has run, the ECU instance can start reading raw CAN frames from the connected CAN interface, convert them into PGN data and pass the result to the callback function, on_message, of the j1939reader.J1939Reader instance.
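A minimal sketch of what start_listening does, with method and configuration names taken from the description above; the actual implementation in j1939reader.py may differ in detail:

def start_listening(self):
    # create the ECU and attach it to the configured CAN interface
    self.ecu = j1939.ElectronicControlUnit()
    self.ecu.connect(bustype='socketcan', channel=self.cfg['can.port'])
    # register this reader (a ControllerApplication subclass) with the ECU so that
    # reassembled PGN data is delivered to its on_message callback
    self.ecu.add_ca(controller_application=self)
    self.subscribe(self.on_message)
    self.start()  # start the ControllerApplication thread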
2. Function: on_message¶
The callback function on_message receives PGN data and finds the corresponding CAN message in self.db by running identify_message. If the return value of identify_message is not None, the observed PGN has a corresponding message, so on_message iterates over the message's list of signals, decodes each signal and puts the result into self.queue by running put_signal_in_queue.
3. Function: identify_message¶
identify_message examines the database instance (self.db), which has been built from the passed DBC file (cfg['vss.dbcfile']), to find a message (cantools.database.can.Message) that corresponds to the observed PGN. Because the PGN is the only available parameter that identifies which parameter group a CAN message is intended for, it is important to understand how a CAN frame (especially the CAN ID) is structured, so that the application can compare the observed PGN with a candidate message's ID to confirm whether or not they match.
In the case of PGN-61444 (Electronic Engine Controller 1 / EEC1), 61444 converted to hex is (0x)f004. Therefore, identify_message should find a CAN message whose ID contains f004 among the messages in self.db. The IDs of all messages in self.db are determined by the passed DBC file (cfg['vss.dbcfile']). The following image (source) shows what a J1939 DBC file looks like.
The needed information in the above image is CAN ID: 2364540158, which is (0x)8CF004FE when converted to hex. To understand what exactly (0x)8CF004FE indicates, one can refer to the following image that explains the J1939 message format.
As described above, a CAN ID consists of 29 bits in J1939. To express the value on a bit level, binary conversion is applied to (0x)8CF004FE, giving (0b) 1000 1100 1111 0000 0000 0100 1111 1110. With this, the following information can be derived.
ID Form | Corresponding Value of EEC1 |
---|---|
PGN | 61444 |
PGN in hex | (0x) f004 |
PGN in binary | (0b) 1111 0000 0000 0100 |
DBC ID | 2364540158 |
DBC ID in hex | (0x) 8cf004fe |
DBC ID in binary | (0b) 1000 1100 1111 0000 0000 0100 1111 1110 |
Since this binary representation has 32 bits, which is more than the 29 bits of the identifier, the first three bits are dropped: (0b) 0 1100 1111 0000 0000 0100 1111 1110. With this and the message format image, the following information can be derived from the EEC1 message ID.
J1939 Message Info | Binary | Decimal | Hex |
---|---|---|---|
3 Bit Priority | (0b) 0 11(00) | 3 | (0x) c |
18 Bit PGN | (0b) (00) 1111 0000 0000 0100 | 61444 | (0x) f004 |
8 Bit Source Address | (0b) 1111 1110 | 254 | (0x) fe |
As shown above, the decimal value of the EEC1 message ID's PGN equals 61444, which means that it is possible to confirm whether one of the CAN messages in self.db has the same PGN as the observed PGN. identify_message converts the observed PGN into a hex value and compares it to the hex PGN value of each message in self.db. If the hex value of the observed PGN matches that of the candidate message's PGN, the candidate message is what the observed PGN indicates, and that message is returned.
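The bit arithmetic above can be reproduced in a few lines of Python; this is an illustrative snippet, not code taken from j1939reader.py:

can_id = 2364540158                      # DBC ID of EEC1, i.e. 0x8CF004FE

priority = (can_id >> 26) & 0x7          # -> 3
pgn = (can_id >> 8) & 0x3FFFF            # 18-bit PGN -> 0xF004 == 61444
source_address = can_id & 0xFF           # -> 0xFE == 254

print(hex(pgn), pgn, source_address)     # 0xf004 61444 254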
4. Function: put_signal_in_queue¶
Once the target message is returned by identify_message, on_message iterates over the message's list of signals and puts each signal (cantools.database.can.Signal) with its calculated value into the queue (self.queue) that is later used to feed kuksa-val-server, by running put_signal_in_queue. In put_signal_in_queue there are two scenarios: one where the type of data is "list", and the other where the type of data is "bytearray", as shown below.
In the scenario where the data type is "list", the size of the data exceeds a CAN frame's maximum payload of 8 bytes (e.g., 39 bytes with PGN-65251), in which case the data comes as a list of decimal numbers. In this case, the start byte and the byte length of the signal have to be calculated, since each number represents one byte's decimal value and the data is accessed byte by byte. For example, if the DBC file describes the observed signal's start bit as 16 (counting from 0 in DBC files) and its length as 16, the start byte number is 2 (counting from 0) and the data length is 2 bytes, which means that the third and fourth numbers in the list express the observed signal's value. With this information, decode_signal calculates the value of the observed signal using the other attributes described by the DBC file.
In the other scenario, where the data type is "bytearray", the size of the data is 8 bytes. In this case, the data is accessed bit by bit, and the start bit and data length can be used without any processing since they are already given on a bit level. With this information, decode_byte_array directly calculates the value of the observed signal using the other attributes described by the DBC file.
Once the value is calculated, it is checked against the signal's maximum and minimum. If the value is outside the allowed range of the signal, it is clamped to the minimum or maximum before it is passed to the queue (self.queue).
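A condensed sketch of that branching logic; the cantools Signal attribute names (start, length, minimum, maximum, name) are assumptions about what the real put_signal_in_queue relies on, and the helper signatures are made up for illustration:

def put_signal_in_queue(self, signal, data):
    if isinstance(data, list):
        # multi-frame PGN: each list entry is one byte's decimal value
        start_byte = signal.start // 8     # e.g. start bit 16 -> byte 2
        num_bytes = signal.length // 8     # e.g. 16 bits -> 2 bytes
        value = self.decode_signal(start_byte, num_bytes, signal, data)
    else:
        # single-frame PGN: data is a bytearray, accessed on a bit level
        value = self.decode_byte_array(signal, data)
    # clamp out-of-range values before feeding the queue
    if signal.maximum is not None and value > signal.maximum:
        value = signal.maximum
    if signal.minimum is not None and value < signal.minimum:
        value = signal.minimum
    self.queue.put((signal.name, value))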
- One can refer to here to find all the available attributes of cantools.database.can.Signal. The available attributes also depend on the target DBC file.
5. Function: decode_signal¶
decode_signal calculates the value of the observed signal when the data is accessed on a byte level, in which case the data comes as a list of decimal numbers. If the number of bytes (data length) is equal to 1, the raw value can be extracted directly from the data with the start byte number, and the value of the signal can be calculated as follows:
(Source)
If the number of bytes (data length) is equal to 2, two decimal numbers have to be aggregated to calculate the value of the signal, which is done by running decode_2bytes.
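The underlying calculation is the usual DBC linear conversion. The snippet below is a sketch that assumes the cantools attribute names scale and offset for the factor and offset defined in the DBC file:

def decode_signal(self, start_byte, num_bytes, signal, data):
    if num_bytes == 1:
        raw_value = data[start_byte]
    else:  # num_bytes == 2
        raw_value = self.decode_2bytes(start_byte, signal, data)
    # physical value = raw value * factor + offset, as defined in the DBC file
    return raw_value * signal.scale + signal.offset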
6. Function: decode_2bytes¶
decode_2bytes calculates the value of the observed signal when the signal is described with two bytes. Because each decimal number in the list can be converted to hex (e.g., 15 = 0x0f), representing one byte, the aggregation of two decimal numbers is done after converting them to hex.
As described above, the aggregation depends on the byte order, which is either "little_endian" or "big_endian". According to here, in J1939 the payload is encoded in the "little_endian" order from byte 0 to byte 7, while the bits within every byte are in the "big_endian" order, as described in the table below.
Bytes | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
Bits | 7..0 | 15..8 | 23..16 | 31..24 | 39..32 | 47..40 | 55..48 | 63..56 |
To get a raw value out of two hex numbers, they need to be arranged in the "big_endian" order before the decimal conversion. Since the bits within every byte are already in the "big_endian" order, changing the order on a bit level is never required. Therefore, in the case of "little_endian", the start byte comes at the end, whereas with "big_endian" (which is highly unlikely in J1939) it comes at the beginning; the order of bits within each byte remains the same. Once the numbers are merged into one hex number, the merged hex number is converted back to decimal to give the raw value. Then the same formula used in decode_signal is applied to calculate the resulting value.
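A minimal sketch of the two-byte aggregation, written with integer arithmetic instead of the hex-string concatenation described above (same result; the parameter names are assumed):

def decode_2bytes(self, start_byte, signal, data):
    b0, b1 = data[start_byte], data[start_byte + 1]
    if signal.byte_order == 'little_endian':
        # start byte is the least significant byte (the usual J1939 case)
        return (b1 << 8) | b0
    # big_endian: start byte is the most significant byte (rare in J1939)
    return (b0 << 8) | b1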
7. Function: decode_byte_array¶
decode_byte_array calculates the value of the observed signal when the data is accessed on a bit level, in which case the data comes as a bytearray. As explained for decode_2bytes, the payload is encoded in the same way: the bytes are in the "little_endian" order and the bits within every byte are in the "big_endian" order. If the byte order is "little_endian", the bytearray is reversed first and then converted to a list of bits by running byteArr2bitArr, producing a binary string that is later converted to an integer to get the raw value. Otherwise the same process is done without reversing the bytearray, which is highly unlikely in J1939. In either case, changing the order on a bit level is not required.
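An illustrative sketch of that procedure; byteArr2bitArr is the helper named above, while the slice that selects the signal's bits (shown only for the usual little_endian case) and the scale/offset attribute names are assumptions made to keep the example short:

def decode_byte_array(signal, data):
    # reverse the byte order first for the (usual) little_endian case
    if signal.byte_order == 'little_endian':
        data = bytearray(reversed(data))
    # byteArr2bitArr equivalent: one '0'/'1' character per bit, per-byte bit order kept
    bits = ''.join(format(byte, '08b') for byte in data)
    # for an Intel (little_endian) signal, signal.start is its least significant bit
    end = len(bits) - signal.start
    raw_value = int(bits[end - signal.length:end], 2)
    return raw_value * signal.scale + signal.offset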
Running dbcfeeder.py with j1939reader.py¶
Clone the junh-ki/dias_kuksa repository:
$ git clone https://github.com/junh-ki/dias_kuksa.git
Navigate to dias_kuksa/utils/in-vehicle/j1939feeder/ and copy j1939reader.py to kuksa.val/clients/feeder/dbc2val/, where dbcfeeder.py is located.
Install the J1939 Python dependency:
$ pip3 install j1939
Go back to the home directory and install the wheel package:
$ cd
$ git clone https://github.com/benkfra/j1939.git
$ cd j1939
$ pip install .
In dbcfeeder.py (kuksa.val - dbcfeeder.py Setup), any line that involves dbcreader.py should be replaced to work with j1939reader.py.
5-1. Import part:
# import dbcreader
import j1939reader
5-2. Reader class instance creation part:
# dbcR = dbcreader.DBCReader(cfg,canQueue,mapping)
j1939R = j1939reader.J1939Reader(cfg,canQueue,mapping)
5-3. start_listening function part:
# dbcR.start_listening()
j1939R.start_listening()
Make sure kuksa-val-server is up and running and a CAN interface (vcan0 or can0) is configured before running dbcfeeder.py.
Navigate to kuksa.val/clients/feeder/dbc2val/, where dbcfeeder.py is located, and command the following:
$ python3 dbcfeeder.py -d vcan0 -j ../../../certificates/jwt/super-admin.json.token --dbc dias_simple.dbc --mapping dias_mapping.yml
- The following screenshots show what values are stored in kuksa-val-server at the end of playing the log files (can0_otosan_can0-30092020 and can0_otosan_can2-30092020).
In the normal case, dbcfeeder.py is not able to read EngReferenceTorque, EngSpeedAtIdlePoint1 and EngSpeedAtPoint2. These three signals belong to PGN-65251 (Engine Configuration 1 / J1939) and are delivered with a TP.BAM and multiple TP.DT messages, since the size of the message is bigger than 8 bytes (the size of one CAN frame). Also, the value of Aftertreatment1IntakeNOx is 3076.75, which is not correct considering it is bigger than the signal's maximum value in the DBC file, as shown below.
Now, not only is dbcfeeder.py with j1939reader.py able to read these signals, but the value of Aftertreatment1IntakeNOx also appears at the signal's maximum, and the other signals' values differ from the case without J1939, as shown above. This is because dbcfeeder.py has followed the J1939 standard when reading signals from CAN, and all the values here are valid as they appear within their designated scope in the DBC file.
Step 3: Cloud Setup¶
Deployment Option 1 - Manual¶
kuksa.cloud - Eclipse Hono (Cloud Entry)¶
Eclipse Hono provides remote service interfaces for connecting large numbers of IoT devices to a back end and interacting with them in a uniform way regardless of the device communication protocol.
Bosch IoT Hub as Hono¶
The Bosch IoT Hub comprises open-source components developed in the Eclipse IoT ecosystem and other communities, and uses Eclipse Hono as its foundation. Utilizing Hono is essential for dealing with a large number of connected vehicles due to its scalability, security and reliability. The Bosch IoT Hub is available as a free plan for evaluation purposes. The following steps describe how to create a free Bosch IoT Hub instance.
- If you don’t have a Bosch ID, register one here and activate your ID through the registered E-Mail.
- Go to the main page and click “Sign-in” and finish signing-up for a Bosch IoT Suite account. Then you would be directed to the “Service Subscriptions” page.
- In the “Service Subscriptions” page, you can add a new subscription by clicking “+ New Subscription”. Then it would direct you to Product Selection Page that shows you what services can be offered. Choose “Bosch IoT Hub”.
- Then select "Free Plan" and name your Bosch IoT Hub instance. The name should be unique (e.g., kuksa-tut-jun). Click "Subscribe".
- Now you would be in Service Subscriptions Page. It would take a minute or two for your instance to change its status from “Provisioning” to “Active”. Make sure the status is “Active” by refreshing the page.
- When the status is “Active”, click “Show Credentials” of the target instance. Then it would show the instance’s credentials information. This information is used to go to the device registry and register your device in the further steps. (You don’t need to save this information since you can always come back to see.) Let’s copy and save the values of “username” and “password” keys under “device_registry” somewhere.
- Now go to the Bosch IoT Hub - Management API. The Management API is used to interact with the Bosch IoT Hub for management operations. This is where you can register a device on the Bosch IoT Hub instance you've just created and get the tenant configuration that you will ultimately use as input arguments when running cloudfeeder.py (kuksa.val - cloudfeeder.py Setup) for a specific device (e.g., the Raspberry-Pi of a connected vehicle).
8-1. Click “Authorize” and paste the “username” and “password” that you copied in 7, then click “Authorize”. If successfully authorized, click “Close” to close the authorization window.
8-2. Under the “devices” tab, you can find the “POST” bar. This is to register a new device. Click the tab and then “Try it out” to edit. Copy and paste the tenant-id of the Bosch IoT Hub instance to where it is intended to be placed.
8-3. Under “Request body”, there would be a JSON dictionary like the following:
{
"device-id": "4711",
"enabled": true
}
You can rename the string value of “device-id” according to your taste:
{
"device-id": "kuksa-tut-jun:pc01",
"enabled": true
}
8-4. Then click "Execute". If the server responds with code 201, the device has been registered successfully. If you click "Execute" with the same JSON dictionary again, it returns code 409, which means you have tried to register the same device again and it was not registered due to the conflict with the existing one. However, if you change "device-id" to something new and click "Execute", it returns code 201 because you have just registered a new device name.
- Just like this, you can register up to 25 devices with a free plan Bosch IoT Hub instance. This means that 25 vehicles or any other IoT devices can be connected to this one Bosch IoT Hub instance and each and every one of them interacts with the instance through a unique “device-id”.
- To list all the registered devices’ ids, you can click the “GET /registration/{tenant-id}” bar, type the instance’s tenant-id and click “Execute”. If successful, the server would return a code 200 with the device data that lists all the devices that are registered to the instance.
9. What we have done so far is create a Bosch IoT Hub instance and register devices in it. However, we haven't yet configured credentials for each device. Credential information lets you access a specific device registered in the instance. The following steps illustrate how to add new credentials for a device.
9-1. Under the “credentials” tab, find and click the “POST” bar.
9-2. Click “Try it out” and paste the tenant-id of the Bosch IoT Hub instance to where it is intended to be placed.
9-3. In the JSON dictionary, change the value of “device-id” to the target device-id’s value.
9-4. Set values of “auth-id” and “password” according to your preference:
{
"device-id": "kuksa-tut-jun:pc01",
"type": "hashed-password",
"auth-id": "pc01",
"enabled": true,
"secrets": [
{
"password": "kuksatutisfun01"
}
]
}
If the server responds with code 201, the new credentials have been added successfully.
- Here the values of "auth-id" and "password" are used to run cloudfeeder.py. Therefore it is recommended to save them somewhere.
9-5. Now we have all the information needed to run cloudfeeder.py:
- Host URL: “mqtt.bosch-iot-hub.com”
- Protocol Port Number: “8883”
- Credential Authorization Username (e.g., “{auth-id}@{tenant-id}”): “pc01@td23aec9b9335415594a30c7113f3a266”
- Credential Authorization Password: “kuksatutisfun01”
- Server Certificate File: “iothub.crt”
- Data Type: “telemetry”
With the information in 9-5 (it will be different in your case), we can run cloudfeeder.py (kuksa.val - cloudfeeder.py Setup). Navigate to kuksa.val/clients/vss-testclient/ and command:
$ python3 cloudfeeder.py --host mqtt.bosch-iot-hub.com -p 8883 -u pc01@td23aec9b9335415594a30c7113f3a266 -P kuksatutisfun01 -c iothub.crt -t telemetry
kuksa.cloud - InfluxDB (Time Series Database)¶
Now that we have set up a Hono instance, cloudfeeder.py can send telemetry data to Hono every one to two seconds. Hono may be able to collect all the data from its connected vehicles. However, Hono is not a database, meaning that it doesn't store the collected data itself. This also means that we need a time series database that can collect and store the data received by Hono in chronological order.
InfluxDB is another kuksa.cloud component: an open-source time series database. In KUKSA, InfluxDB is meant to be used as the back end that stores the data arriving at Hono. With InfluxDB, we can make use of the collected data not only for visualization but also for a variety of external services such as a mailing service or an external diagnostic service. InfluxDB should be located on the northbound side of Hono, along with Hono-InfluxDB-Connector, which is placed between Hono and InfluxDB.
- To set up InfluxDB and Hono-InfluxDB-Connector, we can use a Linux machine (Linux Machine (Data Consumer)). From Hono's point of view, the Linux machine here can be considered a data consumer, while the in-vehicle Raspberry-Pi is considered a data publisher.
- The following steps for setting up InfluxDB are written based on this tutorial.
VirtualBox with Ubuntu 18.04 LTS is used here for setting up InfluxDB and Hono-InfluxDB-Connector. (VM Setup Tutorial can be found here.) (If your default OS is already Linux, this step can be skipped.)
Run your Virtual Machine (VM) and open a terminal.
Before InfluxDB installation, command the following:
$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo apt install curl
$ curl -sL https://repos.influxdata.com/influxdb.key | sudo apt-key add -
$ source /etc/lsb-release
$ echo "deb https://repos.influxdata.com/${DISTRIB_ID,,} ${DISTRIB_CODENAME} stable" | sudo tee /etc/apt/sources.list.d/influxdb.list
Then install InfluxDB:
$ sudo apt-get update && sudo apt-get install influxdb
Start InfluxDB:
$ sudo service influxdb start
- If there is no output produced from this command, you have successfully set up InfluxDB on your VM. Please continue with step 6 if you want to know how to interact with InfluxDB through a Command Line Interface (CLI). Otherwise, you can move directly on to Hono-InfluxDB-Connector (dias_kuksa - Hono-InfluxDB-Connector).
Connect to InfluxDB by commanding:
$ influx
- After this command, you would be inside the InfluxDB shell.
Create a database, “kuksademo”, by commanding inside the InfluxDB shell:
> CREATE DATABASE kuksademo
- This command produces no output, but when you list the database, you should see that it was created.
List the databases by commanding inside the InfluxDB shell:
> SHOW DATABASES
Select the newly created database, “kuksademo”, by commanding inside the InfluxDB shell:
> USE kuksademo
- It should produce the following output on the terminal: “Using database kuksademo”
Insert some test data using the following command:
> INSERT cpu,host=serverA value=0.64
- More information about inserting data can be found here
The insert command does not produce any output, but you should see your data when you perform a query:
> SELECT * from cpu
Type “exit” to leave the InfluxDB shell and return to the Linux shell:
> exit
(Optional) If you want to write test data from the Linux shell, you can run the following one line script:
$ while true; do curl -i -XPOST 'http://localhost:8086/write?db=kuksademo' --data-binary "cpu,host=serverA value=`cat /proc/loadavg | cut -f1 -d ' '`"; sleep 1; done
- This command writes data to the kuksademo database every second.
You can verify that data is being sent to InfluxDB by using the influx shell and running a query:
$ influx
> USE kuksademo
> SELECT * FROM cpu
dias_kuksa - Hono-InfluxDB-Connector¶
Now that Hono and InfluxDB are set up, we need a connector application to transmit the incoming data from Hono to InfluxDB. cloudfeeder.py produces and sends Hono the resulting telemetry messages in the form of a JSON dictionary. Therefore the connector application should be able to read the JSON dictionary from Hono, map the dictionary to several individual metrics and send them to InfluxDB using the curl command (a conceptual sketch follows the notes below).
- Since the messaging endpoint of Hono (Bosch IoT Hub) follows the AMQP 1.0 protocol, the connector application should also be AMQP-based.
- An AMQP-based connector application can be found in dias_kuksa/utils/cloud/maven.consumer.hono in the junh-ki/dias_kuksa repository. The application is written based on iot-hub-examples/example-consumer from the bosch-io/iot-hub-example repository.
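Conceptually, the connector does little more than the following. The real implementation is the Java application referenced above; this Python sketch, with a made-up metric name and the kuksademo test database from the InfluxDB section, is only meant to illustrate the data flow:

import json
import requests  # stand-in for the curl call the Java connector performs

INFLUX_WRITE_URL = "http://localhost:8086/write?db=kuksademo"

def handle_message(payload):
    metrics = json.loads(payload)                  # 1. parse the JSON dictionary from cloudfeeder.py
    for name, value in metrics.items():            # 2. map it to individual metrics
        line = "{} value={}".format(name, value)   # 3. InfluxDB line protocol, e.g. "NOx_actual value=120.5"
        requests.post(INFLUX_WRITE_URL, data=line) # 4. write each metric to InfluxDB

# example: a telemetry message received from Hono's AMQP endpoint
handle_message('{"NOx_actual": 120.5, "EngineSpeed": 1312.0}')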
To set up the connector, you first have to clone the junh-ki/dias_kuksa repository on your machine:
$ git clone https://github.com/junh-ki/dias_kuksa.git
Navigate to dias_kuksa/utils/cloud/maven.consumer.hono and check README.md. As stated in README.md, there are three prerequisites to be installed before running this application.
2-1. Update the system:
$ sudo apt update
$ sudo apt upgrade
2-2. Install Java (OpenJDK 11.0.8):
$ sudo apt install openjdk-11-jre-headless openjdk-11-jdk-headless
$ export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64/
$ echo $JAVA_HOME
2-3. Install Maven (Apache Maven 3.6.0):
$ sudo apt install maven
$ mvn --version
2-4. Install mosquitto-clients:
$ sudo apt install mosquitto-clients
2-5. Install curl:
$ sudo apt install curl
Navigate to dias_kuksa/utils/cloud/maven.consumer.hono/ and command the following:
$ mvn clean package -DskipTests
- This command compiles the src folder with Maven and produces the target folder that contains a .jar-formatted binary file, maven.consumer.hono-0.0.1-SNAPSHOT.jar.
Now that you have the binary file, you can execute the connector application. In the same directory, dias_kuksa/utils/cloud/maven.consumer.hono/, command the following:
$ java -jar target/maven.consumer.hono-0.0.1-SNAPSHOT.jar --hono.client.tlsEnabled=true --hono.client.username={messaging-username} --hono.client.password={messaging-password} --tenant.id={tenant-id} --device.id={device-id} --export.ip={export-ip}
- (Bosch IoT Hub) The corresponding info (messaging-username, messaging-password, tenant-id, device-id) can be found on the Service Subscriptions Page.
- If InfluxDB is deployed manually, export-ip shall be set to: localhost:8086.
- The startup can take up to 10 seconds. If you are still running cloudfeeder.py, the connector application should print out the telemetry messages on the console.
- (Optional) If you want to change the way the connector application post-processes telemetry messages, you can modify ExampleConsumer.java, which can be found in the directory dias_kuksa/utils/cloud/maven.consumer.hono/src/main/java/maven/consumer/hono/.
- The method handleMessage is where you can post-process.
- The content variable is where the received JSON dictionary string is stored.
- To separate the dictionary into several metrics and store them in a map, the mapJSONDictionary method is used.
- Each metric is stored in an individual variable according to its type and sent to the InfluxDB server through the curlWriteInfluxDBMetrics method.
- You can add your post-processing before curlWriteInfluxDBMetrics if necessary.
kuksa.cloud - Grafana (Visualization Web App)¶
So far we have successfully managed to set up Hono and InfluxDB, and transmit data incoming to Hono to InfluxDB by running Hono-InfluxDB-Connector. Now our concern is how to visualize the data inside InfluxDB. One way to do this is to use Grafana.
Grafana is a multi-platform open source analytics and interactive visualization web application. The idea here is to get Grafana to read InfluxDB and visualize the read data.
- The installation steps to set up Grafana are written based on here.
To install Grafana (stable version 2.6) on your VM, run the following commands:
$ sudo apt-get install -y apt-transport-https
$ sudo apt-get install -y software-properties-common wget
$ wget -q -O - https://packages.grafana.com/gpg.key | sudo apt-key add -
$ echo "deb https://packages.grafana.com/oss/deb stable main" | sudo tee -a /etc/apt/sources.list.d/grafana.list
$ sudo apt-get update
$ sudo apt-get install grafana
Start Grafana service:
$ sudo service grafana-server start
If this command doesn't work, list the PIDs on port 3000 (Grafana uses port 3000) to see whether grafana-server is already running on one of them:
$ sudo apt install net-tools
$ sudo netstat -anp tcp | grep 3000
Assuming the PID number is 13886:
$ sudo kill 13886
$ sudo service grafana-server start
Check whether the Grafana instance is running:
$ sudo service grafana-server status
Press ctrl + c to get out.
Now that the Grafana server is running on your machine, you can access the server using a web browser. Open a browser and go to the following address:
http://localhost:3000/
Log in with the admin account:
Email or username: admin Password: admin
After logging in, click “Configuration” on the left, click “Add data source” and select “InfluxDB”.
Then you will be on the InfluxDB settings page. Go to "HTTP" and set the URL as follows:
URL: http://localhost:8086
Then go to "InfluxDB Details". Here we are going to select the "kuksademo" database that we created to test InfluxDB. You can also choose another database that Hono-InfluxDB-Connector has been sending data to. To choose "kuksademo", enter the following information:
Database: kuksademo User: admin Password: admin HTTP Method: GET
Click “Save & Test”. If you see the message, “Data source is working”, it means that Grafana has been successfully connected to InfluxDB.
Now you can create a new dashboard. Click “Create” on the left and click “Add new panel”.
Then you will be on the panel editing page. You can choose which metrics you want to analyze. This depends entirely on which metrics you have been sending to InfluxDB. Since the metric we created in "kuksademo" is cpu, you can set the following information:
FROM: default cpu
Click "Apply" on the upper right. Now that a new dashboard with a panel has been created, you can change the time scope, refresh, or save the dashboard at the top.
- In the same way, you can create multiple panels in the dashboard for different metrics.
Deployment Option 2 - Docker Compose¶
Deployment Option 1 - Manual was introduced to explain which cloud components are used for kuksa.cloud and how to configure them so that they can interact with each other. However, deploying every cloud component, configuring it, setting a data source for Grafana and designing its dashboard manually is not plausible when considering a huge number of connected vehicles. This is where container technology like Docker comes into play. A couple of key concepts are described below:
- Docker Container: A standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another.
- Docker Compose: A tool for defining and running several Docker containers. A YAML file is used to configure the application's services.
- Kubernetes: One difference between Docker Compose and Kubernetes is that Docker Compose runs on a single host, whereas Kubernetes is for running and connecting containers on multiple hosts.
The key point of using Docker is to facilitate automation so that users can deploy the applications in an agile and efficient way. To learn all the concepts and basics of Docker and be familiar with them, you can follow this tutorial. The subsequent contents are written based on the assumption that readers are familiar with Docker.
In the case of DIAS-KUKSA, there are two deployment options that utilize Docker:
- Docker Compose
- Azure Kubernetes Service(AKS)
When deploying with Docker Compose, it is assumed that a Bosch-IoT-Hub instance is already up and running. Therefore the deployment only includes Hono-InfluxDB-Connector, InfluxDB and Grafana. Docker Compose runs only on a single host (a single Ubuntu machine). Even though it can only take care of a single connected vehicle, deploying with Docker Compose can be advantageous because it eases the development process by reducing the time and effort spent on setting the deployment configuration for each application and creating the identical Grafana dashboard. Therefore Docker Compose deployment is applicable for development, test and evaluation purposes.
On the other hand, AKS includes all the cloud components (Eclipse Hono, Hono-InfluxDB-Connector, InfluxDB and Grafana) and runs on multiple hosts, meaning that it can be highly advantageous for a commercial distribution that deals with large amounts of data transferred from a number of connected vehicles. The downside of using AKS is that it costs money, since the service is offered by Microsoft Azure, and the deployment configuration is more intricate. Therefore using AKS is more favorable for commercial distribution than for development purposes.
In this part, Docker Compose deployment is covered in detail. The contents include:
- How to install Docker and Docker Compose
- How to modify the Hono-InfluxDB-Connector Docker image
- How to set data sources and dashboards in Grafana according to your use-case
- How to set up docker-compose.yml for the KUKSA cloud components (Hono-InfluxDB-Connector, InfluxDB and Grafana)
- How to deploy the KUKSA cloud components with Docker Compose
The end goal here is to deploy these applications as Docker containers, as shown in the figure below, and to establish connectivity among the containerized applications.
Installing Docker and Docker Compose¶
Install Docker (and Docker Compose) with snap:
$ sudo snap install docker
- If you don't install Docker with snap, you may face a version conflict with Docker Compose.
- The Docker installation with snap includes the Docker Compose installation.
$ docker --version $ docker-compose --version
If you don’t want to preface the
docker
command withsudo
, create thedocker
group and add your user to thedocker
group:$ sudo groupadd docker $ sudo usermod -aG docker $USER $ newgrp docker
Log out and log back in to re-evaluate your group membership.
Run
docker
commands withoutsudo
to verify that the changes have been applied:$ docker run hello-world
Now you are ready to proceed. If you only want to test the connectivity with the default DIAS-KUKSA setting, you can directly go to Deployment with Docker Compose.
Modifying and creating a Docker image for Hono-InfluxDB-Connector¶
Unlike InfluxDB and Grafana, Hono-InfluxDB-Connector is an application that is designed to serve one particular task. This means that the application needs to be changed according to the target metrics. Since the application cannot be generic but only user-specific, it is important to understand how to make changes to the application, build a new Docker image with those changes and push it to the Docker Hub registry. One might ask why the application needs to be docker-containerized and pushed to Docker Hub when one could simply run the resulting Jar file on a local machine. This can be easily explained with the figure below.
The figure describes the following scenario:
- Docker Host 1 builds the Hono-InfluxDB-Connector image by running its Dockerfile. During the build process, Maven and Java images are pulled to build the executable Jar file.
- After the Jar file is created, the Docker image is produced. Then Docker Host 1 pushes the image to the Docker Hub registry on the Internet. (To do this, one needs to log in to Docker Hub on a local terminal to designate the destination repository.)
- Once the Hono-InfluxDB-Connector image is available on Docker Hub, the other hosts (2, 3, 4) can also use the image as long as Internet access is available and Docker (and Docker Compose) is installed locally. Finally, the other Docker hosts (2, 3, 4) pull and run Hono-InfluxDB-Connector along with InfluxDB and Grafana through Docker Compose. The containers produced by Docker Compose are set to interact with each other according to the configuration in docker-compose.yml.
As already mentioned in 3), the rest of the Docker hosts (2, 3, 4) do not need to pull the latest code and build it with Maven to create the executable Jar file, because the updated Hono-InfluxDB-Connector Docker image is already available on Docker Hub. All they need is Docker (and Docker Compose) installed locally, Internet access and the pull address of the updated image. This avoids repetitive tasks such as pulling the source-code repository, making changes and building the application with Maven to create the executable Jar file. In this way, a user can simply pull the application image from Docker Hub and run a container out of it.
- Make changes in dias_kuksa/utils/cloud/maven.consumer.hono/src/main/java/maven/consumer/hono/ExampleConsumer.java according to your purpose.
- The changes should be made depending on the telemetry message sent by cloudfeeder.py. Please consider the format of the message and the availability of the intended metrics in the message.
To create a Docker image out of Hono-InfluxDB-Connector, a Dockerfile is required. The Dockerfile for Hono-InfluxDB-Connector is located in dias_kuksa/utils/cloud/maven.consumer.hono/. The Dockerfile consists of two stages: Jar Building and Image Building. The Dockerfile is self-explanatory thanks to the comments in it. Navigate to dias_kuksa/utils/cloud/maven.consumer.hono/ and build the Docker image by commanding:
$ docker build -t hono-influxdb-connector .
Assuming a Docker Hub account has already been made (please create one via this link if you haven’t), log in to Docker Hub on your terminal by commanding:
$ docker login --username={$USERNAME} --password={$PASSWORD}
Before pushing hono-influxdb-connector to your Docker Hub repository, tag it according to the following convention:
$ docker tag hono-influxdb-connector {$USERNAME}/hono-influxdb-connector
This way, the tagged Docker image is directed to your repository on Docker Hub and archived there when pushed.
Push the tagged Docker image:
$ docker push {$USERNAME}/hono-influxdb-connector
(Optional) When you want to pull the image from Docker Hub on another Docker host, simply command:
$ docker pull {$USERNAME}/hono-influxdb-connector
Configuring Grafana’s Data Source, Dashboard and Notifier¶
The above shows 7 dashboards that are created based on Bosch’s DIAS-KUKSA implementation. The following is one of the first 6 NOx-map dashboards.
As named in the screenshot above, the depicted dashboard, “DIAS-BOSCH NOx Bin Map - TSCR (Bad)”, consists of 12 status panels, each of which describes a data bin and has three metrics: Sampling Time (s), Cumulative NOx DS (g) and Cumulative Work (J). Every metric here comes from the InfluxDB data source. The rest of the first 6 dashboards follow the same format. The following is the last dashboard.
As shown above, the last dashboard keeps track of the cumulative time of bin-data sampling. This dashboard is meant to send the administrator an alert through a notifier feature when a certain sampling-time threshold is met.
All these dashboards are simply designed to monitor a specific set of data stored in InfluxDB by Hono-InfluxDB-Connector, according to their intended purposes.
Since the Grafana Docker image is offered without any pre-configured dashboards or panel options, users would otherwise have to set InfluxDB as a data source, create these dashboards with multiple panels and set up an Email notifier in Grafana manually on every Docker host (virtual machine) each time they deploy the application, which takes a lot of handwork and is significantly inefficient.
Grafana’s provisioning system helps users with this problem. With the provisioning system, data sources, dashboards and notifiers can be defined via configuration files such as YAML and JSON, which can be version-controlled with Git.
- To set data sources when deploying Grafana with Docker Compose, a YAML configuration file can be used. Under dias_kuksa/utils/cloud/connector-influxdb-grafana-deployment/grafana_config/grafana-provisioning/, there is datasources/ with datasource.yml inside.
datasource.yml contains the same information used to set a data source manually on the Grafana web page (Grafana Server > Configuration > Add data source: “InfluxDB”, “URL”, “Database”, “User”, “Password”).
- Likewise, to set dashboards when deploying Grafana with Docker Compose, a YAML and a JSON configuration file can be used. Under the same /grafana-provisioning/ directory, there is dashboards/ with dashboard.yml and nox_map_dashboard.json inside.
dashboard.yml states the name of the data source that the dashboards receive data from and the path where the dashboard files are located inside the Grafana container when it runs; a minimal sketch is shown after the next item.
- To create such a dashboard JSON file, one needs to create a dashboard manually on Grafana and export it as a JSON file (Grafana Server > Dashboards > Your_Target_Dashboard > Save dashboard (on the top) > “Save JSON to file”). Then rename it according to your preference (e.g., nox_map_dashboard.json).
- As stated earlier, the last panel, titled “Cumulative Bin Sampling Time”, keeps track of the cumulative sampling time of data collection. If the point of evaluation is set to 10 hours, the notification threshold of the panel would be 36000, considering sampling is done approximately every second (10 h = 600 m = 36000 s). When the threshold is finally reached, Grafana sends a message to the registered email to notify the user that it is time to evaluate. This can be set up with notifier.yml in /grafana-provisioning/notifiers/.
notifier.yml states the type of notifier (e.g., Email, Slack, Line, etc.) and the receivers’ addresses in case Email is chosen as the notifier type. If there is more than one receiver, multiple addresses can be added, separated by semicolons, as shown in the screenshot. The result can be checked under Alerting > Notification Channels on the Grafana web-server page.
- Now that you have set up a notifier, you have to set an alert rule in order to receive a message from Grafana under a certain condition. The first screenshot above shows a condition where the alert is triggered when query A, total_sampling_time, is above 300. The second screenshot above shows the kind of message a receiver’s phone would receive via Gmail if the condition is met.
grafana.ini is located in dias_kuksa/utils/cloud/connector-influxdb-grafana-deployment/grafana_config/ and needs to be configured to enable SMTP (Simple Mail Transfer Protocol). Simply put, this is where the sender’s Email account is set. In the case of Gmail, the address of the SMTP host server is smtp.gmail.com:465 (Click here to learn more about SMTP servers). Then set the sender’s Email address, user, and password, password. To use a Gmail account, one needs to have 2FA enabled for the account and then create an App password for password (Click here to learn more about App passwords). from_address and from_name are set to change the sender’s information from the receiver’s perspective.
- At the time of writing this documentation, only the graph panel visualization supports alerts, as stated here.
Note that all configuration files for Grafana are located under /grafana_config/grafana-provisioning/ and /grafana_config/. These directories are later used by Docker Compose to provision Grafana with data sources, dashboards and notifiers. Next, the Docker Compose configuration file is explained.
Configuration Setup¶
docker-compose.yml runs three services here (InfluxDB, Hono-InfluxDB-Connector and Grafana). Since all three services should be connected to each other, they need to be on the same network. Therefore a user-defined bridge network, monitor_network, needs to be configured under every service:
networks:
  - monitor_network
and declared once at the top level of the file:
networks:
  monitor_network:
Hono-InfluxDB-Connector (connector) and Grafana (grafana) depend on InfluxDB (influxdb). Therefore a dependency needs to be configured under both connector and grafana:
depends_on:
  - influxdb
Since the connector service is just a data intermediary, it doesn’t need to be persistent. On the other hand, influxdb and grafana should be persistent if a user wants to keep the accumulated data or metadata even when the services are taken down. Therefore a user-defined volume needs to be configured under influxdb:
volumes:
  - influxdb-storage:/var/lib/influxdb
and under grafana:
volumes:
  - grafana-storage:/var/lib/grafana
  - ./grafana_config/grafana.ini:/etc/grafana/grafana.ini
  - ./grafana-provisioning/:/etc/grafana/provisioning/
with both volumes declared at the top level of the file:
volumes:
  influxdb-storage:
  grafana-storage:
Here, ./grafana_config/grafana.ini:/etc/grafana/grafana.ini and ./grafana-provisioning/:/etc/grafana/provisioning/ are additionally added for grafana. These provision grafana with the data source, dashboard and notifier that were configured in Configuring Grafana’s Data Source, Dashboard and Notifier. docker-compose.yml therefore finds grafana_config/grafana.ini and grafana-provisioning/ in the current directory and maps them to /etc/grafana/grafana.ini and /etc/grafana/provisioning/ respectively in the grafana Docker service’s file system. Likewise, each of the internally defined volumes (influxdb-storage and grafana-storage) is mapped to the corresponding directory in the target service’s file system.
- The username and password used to connect to each of the influxdb and grafana servers, and the credentials of the target Bosch-IoT-Hub instance, can be provided to the connector service with the env file, since they can differ from user to user. env is in the same directory where docker-compose.yml is located and is hidden by default.
The information needs to be stated in docker-compose.yml as well:
environment:
- INFLUXDB_DB=dias_kuksa_tut
- INFLUXDB_ADMIN_USER=${INFLUXDB_USERNAME}
- INFLUXDB_ADMIN_PASSWORD=${INFLUXDB_PASSWORD}
command: --hono.client.tlsEnabled=true --hono.client.username=messaging@${HONO_TENANTID} --hono.client.password=${HONO_MESSAGINGPW} --tenant.id=${HONO_TENANTID} --export.ip=influxdb:8086
environment:
- GF_INSTALL_PLUGINS=natel-plotly-panel,vonage-status-panel # to add plugins
- GF_SECURITY_ADMIN_USER=${GRAFANA_USERNAME}
- GF_SECURITY_ADMIN_PASSWORD=${GRAFANA_PASSWORD}
- INFLUXDB_DB=dias_kuksa_tut: The database is set as dias_kuksa_tut because it is the name of the database that Hono-InfluxDB-Connector is targeting.
- export.ip follows {$SERVICE_NAME_IN_DOCKER-COMPOSE-FILE}:{$PORT_NUMBER_IN_DOCKER-COMPOSE-FILE}. Therefore it is influxdb:8086.
- GF_INSTALL_PLUGINS=natel-plotly-panel,vonage-status-panel: The NOx Map dashboard that we are trying to provision uses the vonage-status-panel plugin, which is not provided by default. natel-plotly-panel is just additional, to show how multiple panel plugins can be added.
Deployment with Docker Compose¶
- Make sure a Bosch-IoT-Hub instance is up and running. If you haven’t brought it up, please do so now by following kuksa.cloud - Eclipse Hono (Cloud Entry).
- Make sure you have Docker and Docker Compose installed on your machine. If you haven’t installed them, please do so now by following Installing Docker and Docker Compose.
- In the dias_kuksa repository, you can find the docker-compose.yml file in dias_kuksa/utils/cloud/connector-influxdb-grafana-deployment/. With one command you can deploy all the applications according to the pre-configured settings in the file. However, there are a few things that need to be done by each user first.
3-1. In env, change HONO_TENANTID and HONO_MESSAGINGPW according to your Bosch-IoT-Hub instance’s credentials.
3-2. According to docker-compose.yml, influxdb, connector and grafana are deployed on ports 8086, 8080 and 3000 respectively. Therefore the corresponding ports should be available before running Docker Compose. To check the availability of a certain port, you can use net-tools. With this, you can also kill any process that is running on a certain port to make it available for the target application. Install net-tools and list the PIDs on port 8086 (InfluxDB - 8086, Connector - 8080, Grafana - 3000):
$ sudo apt install net-tools
$ sudo netstat -tanp | grep 8086
By now, a list of PIDs (if any) is shown on the terminal.
3-3. Assuming the number of PID that is running on port 8086 is 13886, you can kill the PID with the following command:
$ sudo kill 13886
3-4. Stop InfluxDB and Grafana if they are already running locally without using Docker:
$ sudo service influxdb stop
$ sudo service grafana-server stop
- Because the deployment uses ports 8086, 8080 and 3000, it makes sense to stop these locally running services to free the corresponding ports before running Docker Compose.
Now that you have made sure all three ports (8080, 8086 and 3000) are available, navigate to dias_kuksa/utils/cloud/connector-influxdb-grafana-deployment/ where the docker-compose.yml file is located and command the following:
$ docker-compose up -d
If there is no error output, you have successfully deployed all the applications configured in the docker-compose.yml file.
Double-check whether three containers are created and working properly:
$ docker ps
Make sure Hono-InfluxDB-Connector, InfluxDB and Grafana are in the “Up” status.
- Now you should be able to access the Grafana server through a web browser.
6-1. Open a browser and go to http://0.0.0.0:3000/.
6-2. Log in with the admin account:
Email or username: admin
Password: admin
6-3. You can access and monitor the provisioned NOx map dashboard (Dashboards > NOx Map Dashboard). Change the time range according to your preference.
In case the provisioned dashboard is not displayed on the main page, hover over “Dashboards” in the left-side bar and go to “Manage”. You should then see “NOx Map Dashboard” under the “General” folder.
Additional Docker Compose commands:
- To stop your services once you have finished with them:
$ docker-compose down
- To also remove the data volumes used by the containers:
$ docker-compose down --volumes
Deployment Option 3 - Azure Kubernetes Service (AKS)¶
** WORK IN PROGRESS… **
(Additional) dias_kuksa - InfluxDB-Consumer¶
Since applications other than Grafana may also need to use InfluxDB, it makes sense to create a consumer application that fetches data from InfluxDB and makes it available for any purpose.
- There is an InfluxDB consumer Python script, influxDB_consumer.py, in dias_kuksa/utils/cloud/.
- The script fetches the latest data under certain keys from the local InfluxDB server and stores it in the corresponding Python dictionary for each key by using the function storeNewMetricVal. You can then use the data in the Python dictionary according to your purpose and goals.
Step 4: Simulation¶
When everything from step 1 to 3 is set up, you can finally test whether or not the components communicate with each other and work correctly.
- If your CAN interface is a physical one (e.g., can0), you can either use a simulation tool such as Vector CANalyzer or CANoe, or connect to the actual CAN in a vehicle.
- If your CAN interface is a virtual one (e.g., vcan0), you can use canplayer from the can-utils library. Before running canplayer, you need to prepare a proper .log file that is passed as an argument when running canplayer. The .log file used here was originally logged with CANalyzer in the .asc format and converted to the .log format with a Python script.
- To log CAN traces with CANalyzer and get the .asc file (which should be converted to the .log format later), or to get the .log file directly with Raspberry-Pi tools, you can follow Reference: Logging CAN traces.
The following describes how to convert a .asc file to a .log file and simulate CAN with the .log file on your Raspberry-Pi, so that you can verify whether your setup functions correctly from In-vehicle to Cloud.
asc2log Conversion¶
Since canplayer from the can-utils library only takes the .log format, the existing .asc file should be converted to the .log format.
Make sure that all KUKSA components from In-vehicle to Cloud have been set up from the previous steps.
- canplayer can be run on the same in-vehicle machine (e.g., your Raspberry-Pi). Therefore you should be on your Raspberry-Pi to proceed further.
- Navigate to dias_kuksa/utils/canplayer/ where asc2log_channel_separator.py and two .asc files (omitted due to the copyright issue and thus shared on request) are located. otosan_can0-30092020.asc was logged on CAN channel 0, while otosan_can2-30092020.asc was logged on channel 2 of the Ford Otosan truck.
- Since canplayer cannot play a .asc file, you have to convert them to the .log format. You can do this conversion with asc2log_channel_separator.py.
- As the description of asc2log_channel_separator.py states, the script not only performs the asc2log conversion but also separates the result by CAN channel in case the target .asc file contains traces from more than one CAN channel. If the target .asc file contains traces from only one CAN channel, the script produces only one result .log file.
- Prior to running asc2log_channel_separator.py, the can-utils library should be installed first. If you have followed the steps from the beginning, you have already installed this library (can-utils).
- To convert otosan_can0-30092020.asc, navigate to dias_kuksa/utils/canplayer/ and command the following:
$ python3 asc2log_channel_separator.py --asc otosan_can0-30092020.asc --can vcan0
- vcan0 should already be configured before running this command. If it isn’t, please follow CAN Interface Option 1 - Virtual CAN (Logfile Simulation Purpose) first and then run the command above.
- As a result of the conversion command above, can0_otosan_can0-30092020.log is created.
Simulation with canplayer¶
- Now that we have the .log file to play, make sure your in-vehicle components are already up and running.
- Configuring vcan0 and running kuksa-val-server.exe and dbcfeeder.py are mandatory; cloudfeeder.py and the other cloud-side components are optional here.
To run canplayer with the target .log file, can0_otosan_can0-30092020.log, navigate to dias_kuksa/utils/canplayer/, where the .log file is located, and command the following:
$ canplayer -I can0_otosan_can0-30092020.log
- You should be able to see signals being updated on both terminals, kuksa-val-server.exe and dbcfeeder.py, as shown in the screenshots below.
- Although the screenshots are taken in an Ubuntu virtual machine for convenience, the environment for this simulation is meant to be Raspberry-Pi.
Reference: Logging CAN traces¶
It would be tedious if you had to get inside the target vehicle, set up the simulation environment and test every time there is a new update to your implementation. This is why having a CAN trace log file is important: it eases the development process. With a log file, you can develop and test your application at your desk without having to be in the vehicle, which saves the time and energy that would otherwise be spent on setting up the test environment.
Although canplayer from the can-utils library on Raspberry-Pi is compliant only with the .log format, it is recommended to use the Vector tools to log CAN traces, since they can provide the traces in a variety of formats, so that the traces can be used not only with canplayer but also with several other tools in different environments. Therefore both ways to log CAN traces in the target vehicle, with and without the Vector tools, are introduced here.
Option 1: with Vector Tools¶
Hardware Prerequisites¶
- Laptop installed with Vector Software
- Licensed Vector CANcase (VN1630 is used here)
- USB Interface for CAN and I/O (comes with CANcase)
- CAN Cable (D-sub /D-sub) x 1
- CAN Adapter (Open cable to D-sub) x 1
Software Prerequisites¶
- Vector Software (CANalyzer Version 13.0 SP2 is used here)
Logging with Vector Tools¶
- Connect the Vector CANcase to the CAN H/L from an ECU in the vehicle by using the CAN cable and adapter. For this, you also need to refer to the ECU hardware specification to find out which ports of the ECU correspond to CAN-high and CAN-low of which CAN channel.
- Connect the Vector CANcase to your laptop and check if the device manager recognizes the CANcase.
- Because the CANcase used here is Vector VN1630, it shows the exact name of the CANcase.
- Run CANalyzer 13.0 SP2.
- The capture shows the case where your CANcase is properly licensed with CANalyzer PRO 13.0. Press “OK” to proceed.
- The capture shows the case where your CANcase is not licensed. You cannot proceed further in this case.
- The first thing you would see in CANalyzer is the “Trace” tab. Here you can see the incoming CAN traces when they are being read.
- To synchronize your CANcase with the target vehicle’s baudrate, you have to configure it manually in CANalyzer. To do this, switch to the “Configuration” tab.
- When you double-click the CANcase icon, a window named “Network Hardware Configuration” would show up. Select the CAN channel (VN1630: written on the back side of CANcase) that you connected to the CAN ports of the target vehicle and set the baudrate the same as that of the vehicle. Then click “OK”.
- To enable the logging function, find the “Logging” box on the right-hand side of the configuration tab and double-click the small node on the left.
- Confirm that the “Logging” box is enabled as the capture below.
- To change the destination folder or the result file format, double-click the folder-shaped icon on the right and set them as you prefer.
- If you want to use the result with canplayer on Raspberry-Pi, set the result file format to “ASCII Frame Logging (*.asc)”. That way, you can convert your result to the .log format by running asc2log_channel_separator.py, which can be found in dias_kuksa/utils/canplayer/.
- Make sure everything is properly connected and configured. You can now start logging CAN traces by pressing the “Start” button on the top left hand corner.
- If working correctly, you should be able to see the incoming CAN traces on the “Trace” tab.
Option 2: with Raspberry-Pi and CAN Shield¶
Hardware Prerequisites¶
- Laptop to ssh Raspberry-Pi
- Raspberry Pi 3 or 4
- CAN Shield (SKPang PiCan2 or Seeed 2 Channel CAN)
- CAN Cable (D-sub /D-sub) x 1
- CAN Adapter (Open cable to D-sub) x 1
Software Prerequisites¶
- Network that can be shared by the laptop and Raspberry-Pi (for SSH purpose, you can also use your mobile hotspot.)
- The can-utils library (can-utils)
Logging with Raspberry-Pi and CAN Shield¶
Assuming the CAN shield is already attached to the Raspberry-Pi, connect the shield to the CAN H/L from an ECU in the vehicle by using the CAN cable and adapter. For this, you also need to refer to the ECU hardware specification to find out which ports of the ECU correspond to CAN-high and CAN-low of which CAN channel.
Once you have successfully SSHed into the Raspberry-Pi, you will be on your Raspberry-Pi’s terminal. Install the can-utils library if you haven’t yet:
$ sudo apt install can-utils
Configure the CAN shield.
- For SKPang PiCan2, refer to CAN Interface Option 2 - SKPang PiCan2 (Only for Raspberry-Pi).
- For Seeed 2 Channel CAN, refer to CAN Interface Option 3 - Seeed 2-Channel Shield (Only for Raspberry-Pi).
Make sure everything is properly connected and configured. Assuming the name of the configured CAN interface is can0, command the following:
$ candump -l can0
- If working correctly, you should be able to see a .log file named with the current time (e.g., candump-2020-10-06_163848.log) in the same directory where the terminal is open.
- If you want to stop logging, press Ctrl+C and check the result .log file to see whether the CAN traces have been logged properly.
Future Work¶
Many implementations and tests have been left for the future due to the limited time, but the topic has great potential to be developed further. Future work concerns the following:
Contact¶
Name: Junhyung Ki
Bosch Email: fixed-term.Junhyung.Ki@de.bosch.com
Personal Email: kijoonh91@gmail.com
Student Email: junhyung.ki001@stud.fh-dortmund.de