Comparing changes

base repository: BlueBrain/Atlas-Download-Tools
base: v0.3.2
head repository: BlueBrain/Atlas-Download-Tools
compare: main
  • 10 commits
  • 17 files changed
  • 6 contributors

Commits on Oct 13, 2021

  1. Remove pandas and responses from requirements (#90)

    * Remove pandas and responses from requirements
    * Remove the requirements.txt file
    Stannislav authored Oct 13, 2021

    Verified · af73e20
  2. Verified · 8d5b273

Commits on Nov 9, 2021

  1. Restructure the CLI (#96)

    * Create a new `atldld download` subcommand
    * Move `atldld dataset download` to `atldld download dataset`
    Stannislav authored Nov 9, 2021

    Verified · 3824394
  2. Verified · 8ac7513

Commits on Apr 27, 2022

  1. Fix CI (#102)

    * Fix CI
    * Change black version
    * Change black version 2
    EmilieDel authored Apr 27, 2022

    Verified · 4b8a7ac

Commits on May 13, 2022

  1. Verified · 1d17912
  2. Verified · b08093f

Commits on Jan 24, 2023

  1. Fix CI (Remove 3.6 python, fix numpy ValueError, change syntax for tox>4) (#109)

    Co-authored-by: Jan <jankrepl@yahoo.com>
    EmilieDel and jankrepl authored Jan 24, 2023

    Verified · 0d32e70

Commits on Jan 25, 2023

  1. Verified · 51cfac0

Commits on Mar 10, 2023

  1. Don't require opencv-python if opencv is already installed (#110)

    This will allow us to get rid of py-opencv-python in our Spack
    deployment
    heerener authored Mar 10, 2023

    Verified · c943f74
6 changes: 3 additions & 3 deletions .github/workflows/run-tests.yml
@@ -8,14 +8,14 @@ jobs:
     runs-on: ubuntu-latest
     strategy:
       matrix:
-        python-version: [3.6, 3.7, 3.8]
+        python-version: [3.7, 3.8, 3.9]
         include:
-          - python-version: 3.6
-            tox-env: py36
           - python-version: 3.7
             tox-env: py37
           - python-version: 3.8
             tox-env: py38
+          - python-version: 3.9
+            tox-env: py39
     steps:
       - name: checkout latest commit
         uses: actions/checkout@v2
11 changes: 8 additions & 3 deletions README.md
@@ -80,12 +80,17 @@ $ atldld info version --help
 $ atldld info cache
 ```
 
-One can also obtain dataset and dataset information through the CLI.
+One can also get dataset information through the CLI.
 ```bash
 $ atldld dataset info $DATASET_ID
 $ atldld dataset preview $DATASET_ID
-$ atldld dataset download $DATASET_ID
 ```
 
+To download a dataset use the `atldld download` command.
+```shell
+$ atldld download dataset $DATASET_ID
+```
+
 For further information please refer to the help part of the corresponding
 command.
 
@@ -125,4 +130,4 @@ matrix_3d = get_3d(dataset_id=DATASET_ID)
 
 The development of this software was supported by funding to the Blue Brain Project, a research center of the École polytechnique fédérale de Lausanne (EPFL), from the Swiss government’s ETH Board of the Swiss Federal Institutes of Technology.
 
-Copyright (c) 2021 Blue Brain Project/EPFL
+Copyright (c) 2021-2022 Blue Brain Project/EPFL
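Note that the download command was not simply renamed: in line with commit #96 above, `atldld dataset download` moved under the new top-level group as `atldld download dataset`.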
Binary file modified docs/_images/banner.jpg
45 changes: 0 additions & 45 deletions requirements.txt

This file was deleted.
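Per commit #90 above, dependencies are now declared only through `install_requires` in setup.py; see the diff below.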

11 changes: 7 additions & 4 deletions setup.py
@@ -21,14 +21,11 @@
 install_requires = [
     "Pillow",
     "appdirs",
-    "click",
+    "click>=8",
     "dataclasses; python_version < '3.7'",
     "matplotlib",
     "numpy",
-    "opencv-python",
-    "pandas",
     "requests",
-    "responses",
     "scikit-image",
 ]
 extras_require = {
@@ -48,6 +45,12 @@
     "docs": ["sphinx>=1.3", "sphinx-bluebrain-theme"],
 }
 
+try:
+    # Could already be installed on the system
+    import cv2  # noqa
+except ImportError:
+    install_requires.append("opencv-python")
+
 description = "Search, download, and prepare atlas data."
 long_description = """
 Among different sources of data, Allen Brain Institute
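The added try/except block runs whenever setup.py is executed: if `cv2` is already importable (for instance from a system-wide or Spack-provided OpenCV build, the motivation given in commit #110 above), the PyPI `opencv-python` wheel is not added to `install_requires`; it is pulled in only when no OpenCV binding is present.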
109 changes: 0 additions & 109 deletions src/atldld/cli/dataset.py
@@ -15,7 +15,6 @@
 # You should have received a copy of the GNU Lesser General Public License
 # along with this program. If not, see <https://www.gnu.org/licenses/>.
 """Implementation of the "atldld dataset" subcommand."""
-import pathlib
 from typing import Any, Dict, Optional, Sequence
 
 import click
@@ -111,114 +110,6 @@ def dataset_info(dataset_id):
     click.secho(textwrap.dedent(output).strip(), fg="green")
 
 
-@dataset_cmd.command(
-    "download", help="Download and synchronize an entire section dataset"
-)
-@click.argument("dataset_id", type=str)
-@click.argument(
-    "output_folder",
-    type=click.Path(exists=False, dir_okay=True, path_type=pathlib.Path),
-)
-@click.option(
-    "--downsample-ref",
-    type=int,
-    default=25,
-    show_default=True,
-    help="Downsampling coefficient for the reference space. Determines the size "
-    "of the synchronized image.",
-)
-@click.option(
-    "--downsample-img",
-    type=int,
-    default=0,
-    show_default=True,
-    help="Downsampling coefficient for the image download.",
-)
-@click.option(
-    "-e",
-    "--include-expression",
-    is_flag=True,
-    help="Include expression image.",
-)
-def dataset_download(
-    dataset_id,
-    output_folder,
-    downsample_ref,
-    downsample_img,
-    include_expression,
-):
-    """Download and synchronize an entire section dataset."""
-    import json
-    import textwrap
-
-    from PIL import Image
-
-    from atldld.sync import DatasetDownloader, DatasetNotFoundError
-
-    # Prepare paths
-    if not output_folder.exists():
-        output_folder.mkdir(parents=True)
-    metadata_path = output_folder / "metadata.json"
-
-    downloader = DatasetDownloader(
-        dataset_id,
-        downsample_ref=downsample_ref,
-        downsample_img=downsample_img,
-        include_expression=include_expression,
-    )
-    cli_input = f"""
-    Dataset ID           : {dataset_id}
-    Downsample reference : {downsample_ref}
-    Downsample image     : {downsample_img}
-    Include expression   : {include_expression}
-    Output folder        : {output_folder}
-    """
-    click.secho(textwrap.dedent(cli_input).strip(), fg="blue")
-
-    try:
-        downloader.fetch_metadata()
-    except DatasetNotFoundError as exc:
-        raise click.ClickException(str(exc))
-    n_images = len(downloader)
-
-    additional_info = f"""
-    Number of section images : {n_images}
-    Section thickness        : {downloader.metadata["dataset"]["section_thickness"]}µm
-    Plane of section         : {downloader.metadata["dataset"]["plane_of_section_id"]}
-    """
-    click.secho(textwrap.dedent(additional_info).strip(), fg="green")
-
-    metadata = {
-        "dataset_id": dataset_id,
-        "downsample_ref": downsample_ref,
-        "downsample_img": downsample_img,
-        "plane_of_section": downloader.metadata["dataset"]["plane_of_section_id"],
-        "section_thickness": downloader.metadata["dataset"]["section_thickness"],
-        "per_image": {},
-    }
-
-    with click.progressbar(downloader.run(), length=n_images) as progress:
-        for image_id, section_coordinate, img, img_expr, df in progress:
-            img_synced = df.warp(img, c=img[0, 0].tolist())
-
-            img_path = output_folder / f"{image_id}.png"
-            Image.fromarray(img_synced, mode="RGB").save(img_path)
-
-            if img_expr is not None:
-                img_expr_synced = df.warp(img_expr)
-                img_expr_path = output_folder / f"{image_id}_expr.png"
-                Image.fromarray(img_expr_synced, mode="RGB").save(img_expr_path)
-
-            metadata["per_image"][image_id] = {
-                "section_coordinate": section_coordinate,
-                "section_coordinate_scaled": section_coordinate
-                / metadata["section_thickness"],
-            }
-
-    with metadata_path.open("w") as f:
-        json.dump(metadata, f, indent=4)
-
-
 @dataset_cmd.command("preview", help="Plot a preview of dataset slices")
 @click.argument("dataset_id", type=int)
 @click.option(
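The functionality deleted here was not lost: commit #96 above moved it behind the new `atldld download dataset` command. For readers who used this file as a recipe for the library API, below is a minimal sketch (not part of the diff) of the same download-and-synchronize loop built on `atldld.sync.DatasetDownloader`, using only calls that appear in the removed code; the dataset ID and output folder are hypothetical placeholders.

```python
import pathlib

from PIL import Image

from atldld.sync import DatasetDownloader, DatasetNotFoundError

output_folder = pathlib.Path("my_dataset")  # hypothetical output location
output_folder.mkdir(parents=True, exist_ok=True)

# Same defaults the removed CLI command used: downsample_ref=25,
# downsample_img=0, expression images off. "479" is a made-up dataset ID.
downloader = DatasetDownloader(
    "479", downsample_ref=25, downsample_img=0, include_expression=False
)
try:
    downloader.fetch_metadata()  # fetches dataset metadata; raises on unknown ID
except DatasetNotFoundError as exc:
    raise SystemExit(str(exc))

for image_id, section_coordinate, img, img_expr, df in downloader.run():
    # Warp each section image into the reference space, padding with the
    # top-left pixel colour, exactly as the removed command did.
    img_synced = df.warp(img, c=img[0, 0].tolist())
    Image.fromarray(img_synced, mode="RGB").save(output_folder / f"{image_id}.png")
```

The expression-image handling and the metadata.json bookkeeping from the removed command are omitted here for brevity.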