Add Purkinje support to batch simulations and minor fixes #61

Merged 14 commits on Dec 4, 2023
35 changes: 9 additions & 26 deletions ToDo
@@ -1,38 +1,21 @@
 #TODO
-Report line number only for the debug build in the helpers functions.
-
-Verify the possibility to not copy the SV from the GPU to the CPU when the ODE solver and the PDE solver are using the GPU.
-
-Allow the writing of multiple AP traces from different cells (Purkinje and tissue). Follow Cristian suggestion to use a file with the following syntax:
-
-<x> <y> <z> <domain_id>
-<x> <y> <z> <domain_id>
-.
-.
-.
-<x> <y> <z> <domain_id>
-
-Make the "vm_threashold" calculus dynamic inside the code. The idea is to calculate the value using only an input percentage (e.g: APD_90, APD_80)
-
-Improve activation time calculus to use the maximum derivative (check Lucas's old function from 2018)
-
-Think a way to optimize the activation time calculus. Try to reduce the simulation time, especially when using the GPU
-
-Remove depracated code from src/linear_system_solver_library/gpu_solvers_cublas_11.c. Use https://github.com/NVIDIA/CUDALibrarySamples/blob/master/cuSPARSE/spsv_csr/spsv_csr_example.c as reference. Change cusparseScsrsv2 to cusparseSpSV.
+1) Report line number only for the debug build in the helpers functions.
+
+2) Verify the possibility to not copy the SV from the GPU to the CPU when the ODE solver and the PDE solver are using the GPU.
+
+3) Make the "vm_threashold" calculus dynamic inside the code. The idea is to calculate the value using only an input percentage (e.g: APD_90, APD_80)
+
+4) Improve activation time calculus to use the maximum derivative (check Lucas's old function from 2018)
+- Optimize the activation time calculus
+- Parallelize with #pragma and avoid race conditions inside the "persistent_data" structure
+- Use simple arrays ? Will consume more memory, but it will be faster.
+
+5) Remove depracated code from src/linear_system_solver_library/gpu_solvers_cublas_11.c. Use https://github.com/NVIDIA/CUDALibrarySamples/blob/master/cuSPARSE/spsv_csr/spsv_csr_example.c as reference. Change cusparseScsrsv2 to cusparseSpSV.
+
+6) Find a way to write the INa, Ito and ICaL currents in the ToRORd and Trovato models.
 
 #KNOW ISSUES:
 The logger symbols are only exported to an executable if an static library linked to a shared library uses then. For now this is ok. But I think it will be a pain in future releases.
 The GPU linear system solver is not working for purkinje-only simulations
 When the minimum number of PMJs is not reached the solver will be in an infinite loop
 
-## Scripts:
-- Rewrite the 'trace_plot' script in a more general way.
-- The output should be written in the limpetGUI format as well:
-
-<t> <sv_1> <sv_2> <sv_3> ... <sv_n>
-. . . . .
-. . . . .
-. . . . .
-
-
-Fix the "save_multiple_cell_state_variables". It cannot find the proper cell center.
4 changes: 2 additions & 2 deletions build.sh
@@ -86,7 +86,7 @@ for i in "${BUILD_ARGS[@]}"; do
         COMPILE_CONVERTER='y'
         COMPILE_FIBER_CONVERTER='y'
         COMPILE_POSTPROCESSOR='y'
-        COMPILE_EXPAND='y'
+        #COMPILE_EXPAND='y'
         ;;
     simulator)
         COMPILE_SIMULATOR='y'
@@ -240,7 +240,7 @@ if [ -n "$COMPILE_MPI" ]; then
     DYNAMIC_DEPS="$DYNAMIC_DEPS $MPI_LIBRARIES"
     EXTRA_LIB_PATH="$EXTRA_LIB_PATH $CUDA_LIBRARY_PATH $MPI_LIBRARY_PATH $LIBRARY_OUTPUT_DIRECTORY"
 
-    COMPILE_EXECUTABLE "MonoAlg3D_batch" "$SRC_FILES" "$HDR_FILES" "$STATIC_DEPS" "$DYNAMIC_DEPS" "$CUDA_LIBRARY_PATH $CUDA_MATH_LIBRARY_PATH $EXTRA_LIB_PATH" "$INCLUDE_P"
+    COMPILE_EXECUTABLE "MonoAlg3D_batch" "$SRC_FILES" "$HDR_FILES" "$STATIC_DEPS" "$DYNAMIC_DEPS" "$CUDA_LIBRARY_PATH $CUDA_MATH_LIBRARY_PATH $EXTRA_LIB_PATH" "-I$MPI_INCLUDE_PATH"
 fi
 
 fi
38 changes: 38 additions & 0 deletions scripts/concatenate_alg_columns.py
@@ -0,0 +1,38 @@
# ==================================================================================================
# Author: Lucas Berg @bergolho
# Date: 10/11/2023
# Script to combine the ALG and FIBERS file into a single one.
# --------------------------------------------------------------------------------------------------
# Dependencies: Pandas and NumPy
# ==================================================================================================

import sys
import numpy as np
import pandas as pd

def main():
if len(sys.argv) != 3:
print("-------------------------------------------------------------------------")
print("Usage:> python %s <input_alg_file> <input_fibers_file>" % sys.argv[0])
print("-------------------------------------------------------------------------")
print("<input_alg_file> = Input ALG file with center, discretization and extra data")
print("<input_fibers> = Input FIBERS file with (f,s,n)")
print("-------------------------------------------------------------------------")
return 1

input_alg_filename = sys.argv[1]
input_fibers_filename = sys.argv[2]

alg_data = pd.read_csv(input_alg_filename, sep=',')
#print(alg_data)

fibers_data = pd.read_csv(input_fibers_filename, sep=' ')
#print(fibers_data)

result_data = pd.concat([alg_data, fibers_data], axis=1)
#print(result_data)

result_data.to_csv("output.alg", header=True, index=False)

if __name__ == "__main__":
main()
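For reference, a hypothetical invocation: python scripts/concatenate_alg_columns.py mesh.alg mesh.fibers (filenames are placeholders). Both inputs are expected to carry a header row (pandas' default header inference), and the merged table is always written to output.alg in the current directory.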
5 changes: 4 additions & 1 deletion src/domains_library/mesh_info_data.h
@@ -11,15 +11,16 @@ struct default_fibrotic_mesh_info {
 };
 
 struct dti_mesh_info {
-
     enum dti_transmurality_labels {
         DTI_FAST_ENDO,
         DTI_ENDO,
         DTI_MID,
         DTI_EPI
     } dti_transmurality_labels;
     float transmurality;
+    float apicobasal;
     float base_apex_heterogeneity;
+    int fast_endo;
 };
 
 #define FIBROTIC_INFO(grid_cell) (struct default_fibrotic_mesh_info *)(grid_cell)->mesh_extra_info
@@ -42,5 +43,7 @@ struct dti_mesh_info {
 #define DTI_MESH_TRANSMURALITY_LABELS(grid_cell) (DTI_MESH_INFO(grid_cell))->dti_transmurality_labels
 #define DTI_MESH_TRANSMURALITY(grid_cell) (DTI_MESH_INFO(grid_cell))->transmurality
 #define DTI_MESH_BASE_APEX_HETEROGENEITY(grid_cell) (DTI_MESH_INFO(grid_cell))->base_apex_heterogeneity
+#define DTI_MESH_APICOBASAL(grid_cell) (DTI_MESH_INFO(grid_cell))->apicobasal
+#define DTI_MESH_FAST_ENDO(grid_cell) (DTI_MESH_INFO(grid_cell))->fast_endo
 
 #endif /* __MESH_INFO_DATA_H */
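For context, these accessors expand to field references through the DTI_MESH_INFO cast, so they can be read or assigned directly. A hypothetical use of the two new ones, assuming cell is a grid cell whose mesh_extra_info points at a struct dti_mesh_info:

/* Hypothetical usage sketch, not code from this PR. */
DTI_MESH_APICOBASAL(cell) = 0.5f;   /* write the new apicobasal field */
if(DTI_MESH_FAST_ENDO(cell)) {
    /* e.g. apply fast-endocardial conduction parameters to this cell */
}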
12 changes: 12 additions & 0 deletions src/gpu_utils/gpu_utils.cu
@@ -18,3 +18,15 @@ __global__ void kernel_gpu_vec_div_vec(real *vec1, real *vec2, real *vec3, size_
}
}

// Adapted from: https://stackoverflow.com/questions/68823023/set-cuda-device-by-uuid
void uuid_print (cudaUUID_t a){
printf("GPU UUID ");
int r[5][2] = {{0,4}, {4,6}, {6,8}, {8,10}, {10,16}};
for (int i = 0; i < 5; i++) {
printf("-");
for (int j = r[i][0]; j < r[i][1]; j++) {
printf("%02x", (unsigned)(unsigned char)a.bytes[j]);
}
}
printf("\n");
}
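The r table above splits the 16-byte UUID into the canonical 8-4-4-4-12 hex groups of the standard UUID string form. A hypothetical call site, mirroring how monodomain_solver.c uses it further down in this diff:

// Assumes a valid GPU index in 'device'; prints e.g.
// "GPU UUID -xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx".
cudaDeviceProp prop;
cudaGetDeviceProperties(&prop, device);
uuid_print(prop.uuid);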
1 change: 1 addition & 0 deletions src/gpu_utils/gpu_utils.h
@@ -36,6 +36,7 @@ extern "C" {
void gpu_vec_div_vec(real *vec1, real *vec2, real* res, size_t n);
real gpu_ecg_integral(real *beta_im, real *distances, real *volumes, size_t vec_size);
void cuda_assert(int code, char const *const func, const char *const file, int const line, const char *api);
void uuid_print (cudaUUID_t a);
#ifdef __cplusplus
}
#endif
3 changes: 2 additions & 1 deletion src/main_converter.c
@@ -56,13 +56,14 @@ static void convert_file(const char *input, const char *output, const char *file
         set_vtk_grid_values_from_ensight_file(vtk_grid, full_input_path);
     }
 
-    char ext[4] = ".vtu";
+    char ext[5] = ".vtu\0";
     bool alg = false;
 
     if(FILE_HAS_EXTENSION(input_file_info, "alg")) {
         ext[1] = 'v';
         ext[2] = 't';
         ext[3] = 'k';
+        ext[4] = '\0';
         alg = true;
     }
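Context for this hunk: char ext[4] = ".vtu" fills all four bytes, leaving no room for the terminating '\0' (legal C, but the array is then not a valid C string), so any string function reading ext could run past the array. Sizing the buffer to five bytes, and re-terminating after the in-place rewrite to ".vtk", keeps it a proper string. A standalone illustration (hypothetical, not project code):

#include <stdio.h>
#include <string.h>

int main(void) {
    char bad[4]  = ".vtu";  /* exactly 4 chars stored, no '\0': strlen(bad) is undefined behavior */
    char good[5] = ".vtu";  /* the 5th byte holds the terminator */
    printf("%zu\n", strlen(good));                /* prints 4 */
    good[1] = 'v'; good[2] = 't'; good[3] = 'k'; good[4] = '\0';
    printf("%s\n", good);                         /* prints ".vtk", still terminated */
    return 0;
}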
38 changes: 19 additions & 19 deletions src/monodomain/monodomain_solver.c
@@ -333,11 +333,14 @@ int solve_monodomain(struct monodomain_solver *the_monodomain_solver, struct ode
 
         if(the_ode_solver->gpu && linear_solver_on_gpu) {
             log_info("%d devices available, running both ODE and linear system solvers on GPU (device %d -> %s)\n", device_count, device, prop.name);
+            uuid_print(prop.uuid);
         } else if(the_ode_solver->gpu) {
             log_info("%d devices available, running only the ODE solver on GPU (device %d -> %s)\n", device_count, device, prop.name);
+            uuid_print(prop.uuid);
         }
         else {
             log_info("%d devices available, running only the linear system solver on GPU (device %d -> %s)\n", device_count, device, prop.name);
+            uuid_print(prop.uuid);
         }
 
         check_cuda_error(cudaSetDevice(device));
@@ -1485,7 +1488,7 @@ void compute_pmj_current_tissue_to_purkinje(struct ode_solver *the_purkinje_ode_
     }
 }
 
-// TODO: Find a better place to this function ...
+// TODO: Maybe write a library to the PMJ coupling ...
 void write_pmj_delay (struct grid *the_grid, struct config *config, struct terminal *the_terminals) {
     assert(the_grid);
     assert(config);
@@ -1506,10 +1509,11 @@ void write_pmj_delay (struct grid *the_grid, struct config *config, struct terminal *the_terminals) {
 
     FILE *output_file = NULL;
     output_file = fopen(output_dir_with_file, "w");
-    fprintf(output_file,"curPulse,curTerm,pkLAT,meanTissLAT,pmjDelay,isActive\n");
+    fprintf(output_file,"curPulse,curTerm,pkLAT,meanTissLAT,pmjDelay,isActive,hasBlock\n");
 
     uint32_t num_terminals = the_grid->purkinje->network->number_of_terminals;
 
+    bool has_block;
     uint32_t purkinje_index;
     struct node *purkinje_cell;
     struct point_3d cell_coordinates;
@@ -1537,7 +1541,8 @@ void write_pmj_delay (struct grid *the_grid, struct config *config, struct terminal *the_terminals) {
         //fprintf(output_file,"====================== PULSE %u ======================\n", k+1);
         for(uint32_t i = 0; i < num_terminals; i++) {
 
-            uint32_t term_id = i;
+            has_block = false;
+            uint32_t term_id = i;
             bool is_terminal_active = the_terminals[i].active;
 
             // [PURKINJE] Get the informaion from the Purkinje cell
@@ -1561,7 +1566,7 @@ void write_pmj_delay (struct grid *the_grid, struct config *config, struct terminal *the_terminals) {
             real_cpu purkinje_lat = activation_times_array_purkinje[k];
             //fprintf(output_file,"Terminal %u --> Purkinje cell %u --> LAT = %g\n", i, purkinje_index, purkinje_lat);
 
-            // [TISSUE] Get the informaion from the Tissue cells
+            // [TISSUE] Get the information from the Tissue cells
             struct cell_node **tissue_cells = the_terminals[i].tissue_cells;
             uint32_t number_tissue_cells = arrlen(tissue_cells);
 
@@ -1584,35 +1589,30 @@
 
             // Check if the number of activations from the tissue and Purkinje cell are equal
             if(n_activations_purkinje > n_activations_tissue) {
-                log_error("[purkinje_coupling] ERROR! The number of activations of the tissue and Purkinje cells are different!\n");
-                log_error("[purkinje_coupling] Probably there was a block on the anterograde direction!\n");
-                log_error("[purkinje_coupling] Consider only the result from the second pulse! (retrograde direction)!\n");
-                fprintf(output_file,"ERROR! Probably there was a block on the anterograde direction!\n");
-                cur_pulse = 0;
-                return;
+                //log_error("[purkinje_coupling] ERROR! The number of activations of the tissue and Purkinje cells are different!\n");
+                //log_error("[purkinje_coupling] Probably there was a block on the anterograde direction!\n");
+                //log_error("[purkinje_coupling] Consider only the result from the second pulse! (retrograde direction)!\n");
+                //fprintf(output_file,"ERROR! Probably there was a block on the anterograde direction!\n");
+                has_block = true;
+                cur_pulse = 0;
+                //return;
             }
             mean_tissue_lat += activation_times_array_tissue[cur_pulse];
-            if (activation_times_array_tissue[cur_pulse] < min_tissue_lat) {
-                min_tissue_lat = activation_times_array_tissue[cur_pulse];
-                min_tissue_lat_id = tissue_cells[j]->sv_position;
-            }
 
         }
 
         if (is_terminal_active) {
             mean_tissue_lat /= (real_cpu)number_tissue_cells;
 
             // PMJ delay is calculated using the mean LAT of the coupled tissue cells minus the LAT of the terminal Purkinje cell
             real_cpu pmj_delay = (mean_tissue_lat - purkinje_lat);
 
-            // version 1 = pulse_id, terminal_id, purkinje_lat, mean_tissue_lat, pmj_delay, is_active
-            //fprintf(output_file,"%d,%u,%g,%g,%g,%d\n", k, term_id, purkinje_lat, mean_tissue_lat, pmj_delay, (int)is_terminal_active);
-
-            // version 2 = pulse_id, purkinje_terminal_id, min_tissue_coupled_cell_id, purkinje_lat, min_tissue_lat, pmj_delay, is_active
-            fprintf(output_file,"%d,%u,%u,%g,%g,%g,%d\n", k, term_id, min_tissue_lat_id, purkinje_lat, min_tissue_lat, pmj_delay, (int)is_terminal_active);
+            // pulse_id, terminal_id, purkinje_lat, mean_tissue_lat, pmj_delay, is_active, has_block
+            fprintf(output_file,"%d,%u,%g,%g,%g,%d,%d\n", k, term_id, purkinje_lat, mean_tissue_lat, pmj_delay, (int)is_terminal_active, (int)has_block);
 
             //log_info("[purkinje_coupling] Terminal %u (%g,%g,%g) [Pulse %d] -- Purkinje LAT = %g ms -- Tissue mean LAT = %g ms -- PMJ delay = %g ms [Active = %d]\n", i,
             //    purkinje_cells[purkinje_index]->center.x, purkinje_cells[purkinje_index]->center.y, purkinje_cells[purkinje_index]->center.z, k,
             //    purkinje_lat, mean_tissue_lat, pmj_delay, (int)is_terminal_active);
         }
     }
 }
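The delay rule stated in the comment above (mean LAT of the terminal's coupled tissue cells minus the LAT of the terminal Purkinje cell) is small enough to state on its own; a self-contained sketch, using plain double in place of the project's real_cpu:

/* Hypothetical sketch of the PMJ delay rule, not code from this PR. */
static double pmj_delay_ms(const double *tissue_lats, unsigned n_tissue, double purkinje_lat) {
    double mean = 0.0;
    for(unsigned j = 0; j < n_tissue; j++)
        mean += tissue_lats[j];       /* accumulate tissue activation times */
    mean /= (double)n_tissue;         /* mean LAT over the coupled tissue cells */
    return mean - purkinje_lat;       /* anterograde PMJ delay */
}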
30 changes: 30 additions & 0 deletions src/save_mesh_library/save_mesh_helper.h
@@ -100,6 +100,36 @@ struct save_multiple_cell_state_variables_purkinje_coupling_persistent_data {
uint32_t *purkinje_cell_sv_positions;
};

struct save_multiple_cell_state_variables_purkinje_coupling_with_activation_times_data {
uint32_t num_tissue_cells;
FILE **tissue_files;
char *tissue_file_name_prefix;
real_cpu *tissue_cell_centers;
uint32_t *tissue_cell_sv_positions;

uint32_t num_purkinje_cells;
FILE **purkinje_files;
char *purkinje_file_name_prefix;
real_cpu *purkinje_cell_centers;
uint32_t *purkinje_cell_sv_positions;

struct vtk_unstructured_grid *tissue_grid;
struct point_hash_entry *tissue_last_time_v;
struct point_hash_entry *tissue_num_activations;
struct point_hash_entry *tissue_cell_was_active;
struct point_voidp_hash_entry *tissue_activation_times;
struct point_voidp_hash_entry *tissue_apds;

struct vtk_polydata_grid *purkinje_grid;
struct point_hash_entry *purkinje_last_time_v;
struct point_hash_entry *purkinje_num_activations;
struct point_hash_entry *purkinje_cell_was_active;
struct point_voidp_hash_entry *purkinje_activation_times;
struct point_voidp_hash_entry *purkinje_apds;

bool first_save_call;
};

void add_file_to_pvd(real_cpu current_t, const char *output_dir, const char *base_name, bool first_save_call);
sds create_base_name(char *f_prefix, int iteration_count, char *extension);
