
Commit f3bddf9 (parent 7379815): adding images and README.md

File tree: 4 files changed, +59 −1 lines changed

README.md (+59 −1)

# LCD

LCD is a supervised deep learning framework for quality-of-contact estimation on legged robots. It uses Force/Torque and IMU measurements to predict the probability of Stable Contact (SC) versus Unstable Contact (UC). LCD also works with a reduced feature set (Fz + IMU) in case the robot has point feet, but the results are slightly worse on datasets with extremely low friction coefficient surfaces (< 0.07). Additionally, LCD generalizes across platforms: it can predict the quality of contact on gaits of robot A even though it was trained on data from a different robot.
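As a concrete illustration, a minimal SC/UC classifier in PyTorch might look like the sketch below. The network size and the 12-dimensional input layout (6 F/T channels + 6 IMU channels) are assumptions for illustration, not the architecture from the paper:

```python
import torch
import torch.nn as nn

class ContactNet(nn.Module):
    """Hypothetical LCD-style binary classifier: SC vs. UC.

    Input layout (an assumption, not from the paper): 6 F/T channels
    (Fx, Fy, Fz, Tx, Ty, Tz) + 6 IMU channels (3 linear acc., 3 angular vel.).
    """
    def __init__(self, n_features: int = 12, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # single logit for P(stable contact)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# A batch of 8 samples with the full feature set:
model = ContactNet()
logits = model(torch.randn(8, 12))
probs = torch.sigmoid(logits)  # probability of stable contact
print(probs.shape)             # torch.Size([8, 1])
```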

## Provided Datasets

The provided datasets (.csv) follow a self-explanatory naming scheme: robot name + number of samples + friction coefficient. In the "mixed friction" datasets, the friction coefficient varies from 0.03 to 1.2. The labels are 0 (stable contact), 1 (no contact), or 2 (slip); labels 1 and 2 are merged by LCD into a single UC class. These samples were collected in the RaiSim simulation environment, and the labels were extracted using the contact force together with the ground-truth 3D spatial linear and angular velocity of the foot. Each dataset consists of omni-directional walking gaits of the named robot, sampled at 100 Hz.
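The label merge described above can be sketched with pandas; the label column name used here is an assumption, so check the header of the actual CSV file:

```python
import pandas as pd

def load_lcd_dataset(csv_path, label_col: str = "label"):
    """Hypothetical loader for one of the provided .csv datasets."""
    df = pd.read_csv(csv_path)
    # Original labels: 0 = stable contact, 1 = no contact, 2 = slip.
    # LCD merges 1 and 2 into a single Unstable Contact (UC) class:
    y = (df[label_col] > 0).astype(int)   # 0 = SC, 1 = UC
    X = df.drop(columns=[label_col])
    return X, y
```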
## Requirements

To train, deploy, and visualize an LCD model, the following requirements must be met:

* [pytorch](https://pytorch.org/get-started/locally/)
* `pip install tensorboard`
* `pip install scikit-learn`
* `pip install pandas`
* `pip install matplotlib`
* `git clone https://github.com/mrsp/lcd_core.git`
## Training and Testing procedure

Initiate a training and testing loop with:

`python train.py --train-dataset-csv=training_dataset.csv --test-dataset-csv=testing_dataset.csv --epochs=10`
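For orientation, one epoch of such a binary SC/UC training loop could be sketched as below; `model` and `loader` are placeholders, and this is a sketch rather than the actual contents of `train.py`:

```python
import torch
import torch.nn as nn

def train_one_epoch(model, loader, lr: float = 1e-3):
    """One epoch of binary SC/UC training (illustrative sketch only)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()   # binary SC-vs-UC objective
    model.train()
    total = 0.0
    for X, y in loader:                # y in {0.0, 1.0}: SC vs. UC
        opt.zero_grad()
        loss = loss_fn(model(X).squeeze(-1), y)
        loss.backward()
        opt.step()
        total += loss.item()
    return total / max(len(loader), 1)  # mean loss over the epoch
```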
## Training and Testing results with tensorboard

After training, one can visualize the training and testing performance metrics with:

`tensorboard --logdir ./runs`
Indicative training on the `ATLAS_50k_mixedFriction` dataset is illustrated below, where the accuracy and loss are shown:

![Screenshot](img/lcd-training.png)

In addition, the confusion matrix is depicted for testing on the `ATLAS_21k_02ground` dataset:

![Screenshot](img/lcd-confusion_matrix.png)
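A confusion matrix like the one above can be computed with scikit-learn (already listed in the requirements); the labels below are toy values for illustration, not results from the model:

```python
from sklearn.metrics import confusion_matrix

# Toy SC/UC predictions (0 = stable, 1 = unstable); real values would come
# from running the trained model on a held-out test dataset.
y_true = [0, 0, 1, 1, 0, 1]
y_pred = [0, 1, 1, 1, 0, 0]
cm = confusion_matrix(y_true, y_pred, labels=[0, 1])
print(cm)
# [[2 1]
#  [1 2]]
```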
## Results

Walking on extremely low friction coefficient surfaces causes the robot to slip; however, since the robot transfers its weight to the slipping foot, the vertical ground reaction force (GRF) remains high-valued, and thus mainstream algorithms for predicting the contact state fail.

The predictions of LCD for an experiment with an ATLAS robot walking on a surface with a friction coefficient below 0.1 are shown in the following figure. The top labels denote stable contact and the bottom ones unstable contact (slip + fly):

![Screenshot](img/lcd-comparison.png)

The first two steps occur on a surface with a normal friction coefficient, and thus the labels behave as expected, while during the third and fourth steps the robot slips even though the GRF values are the same as before.
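To see why a vertical-force threshold alone fails in this scenario, consider a toy labeling rule (illustrative only, not the paper's method): a slipping foot can carry the robot's full weight yet still be moving, so stability also has to account for foot motion.

```python
# Toy illustration of why thresholding the vertical GRF mislabels slip.
def force_only_label(fz: float, fz_min: float = 50.0) -> str:
    """Naive rule: loaded foot == stable contact."""
    return "stable" if fz > fz_min else "unstable"

def force_and_velocity_label(fz: float, foot_speed: float,
                             fz_min: float = 50.0, v_max: float = 0.05) -> str:
    """Stable contact requires both load AND a (nearly) motionless foot."""
    return "stable" if fz > fz_min and foot_speed < v_max else "unstable"

# Slipping step: high vertical load (weight transferred) but the foot moves.
print(force_only_label(300.0))               # stable  (wrong)
print(force_and_velocity_label(300.0, 0.4))  # unstable (slip detected)
```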
## Reference

For more information regarding LCD, kindly check our IROS 2022 paper:

[Robust Contact State Estimation in Humanoid Walking Gaits](https://ieeexplore.ieee.org/document/9981354)

img/lcd-comparison.png (192 KB)
img/lcd-confusion_matrix.png (70.8 KB)
img/lcd-training.png (128 KB)
