Commit 4a0aa84

[tools] Second take to fix bloat check
Apparently, in the previous PR I replaced the deprecated DataFrame.append with pandas.concat, but I didn't realize that our scripts attach extra attributes to the data frame, and those attributes are not copied during concatenation. This time I verified the script locally using --github-comment and --github-comment-dryrun. Also, stop fetching submodules in the bloat check job, since fetching them takes most of the job's execution time.
1 parent: cd51370

File tree

2 files changed: +2 -1 lines changed
.github/workflows/bloat_check.yaml (-1)

```diff
@@ -41,7 +41,6 @@ jobs:
         with:
           action: actions/checkout@v3
           with: |
-            submodules: true
             token: ${{ github.token }}
           attempt_limit: 3
           attempt_delay: 2000
```
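For context, the retried checkout step after this change would look roughly like the sketch below. Only the changed lines appear in the diff, so the step name, placement, and the retry-wrapper action (assumed here to be one that accepts `action`, a nested `with`, `attempt_limit`, and `attempt_delay` inputs, matching the keys in the hunk) are assumptions:

```yaml
# Hypothetical reconstruction of the step after this commit.
# The retry wrapper and its version are assumptions based on the
# input names visible in the diff.
- name: Checkout (with retries)
  uses: Wandalen/wretry.action@v1   # assumed retry wrapper
  with:
    action: actions/checkout@v3
    with: |
      token: ${{ github.token }}
    attempt_limit: 3
    attempt_delay: 2000
```

Dropping `submodules: true` means the checkout fetches only the main repository, which is sufficient for the bloat check and avoids the submodule clone that dominated the job's runtime.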

scripts/tools/memory/gh_report.py (+2)

```diff
@@ -375,8 +375,10 @@ def merge(df: pd.DataFrame, comment) -> pd.DataFrame:
             cols, rows = memdf.util.markdown.read_hierified(body)
             break
     logging.debug('REC: read %d rows', len(rows))
+    attrs = df.attrs
     df = pd.concat([df, pd.DataFrame(data=rows, columns=cols).astype(df.dtypes)],
                    ignore_index=True)
+    df.attrs = attrs
     return df.sort_values(
         by=['platform', 'target', 'config', 'section']).drop_duplicates()

```
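The pandas behavior behind this fix can be demonstrated in isolation. This is a minimal sketch, assuming only that the frame carries metadata in `DataFrame.attrs` the way the report scripts do; the column and attribute names here are made up for illustration and are not the ones used by gh_report.py:

```python
import pandas as pd

# A frame carrying custom metadata in .attrs (hypothetical names).
df = pd.DataFrame({'platform': ['efr32'], 'section': ['.text'], 'size': [1024]})
df.attrs['comment'] = 'bloat report'

# A second frame without that metadata, e.g. rows parsed from a comment body.
extra = pd.DataFrame({'platform': ['k32w'], 'section': ['.text'], 'size': [2048]})

# pd.concat() does not reliably carry .attrs over to the result
# (recent pandas propagates attrs only when all inputs agree on them),
# so here the metadata is silently lost.
merged = pd.concat([df, extra], ignore_index=True)

# The fix applied in gh_report.py: save attrs before concatenating
# and reassign them to the result afterwards.
attrs = df.attrs
merged = pd.concat([df, extra], ignore_index=True)
merged.attrs = attrs

print(merged.attrs)   # {'comment': 'bloat report'}
```

The save/restore pattern is version-robust: it makes no assumption about whether a given pandas release propagates `attrs` through `concat`, which is why it is a safer fix than relying on propagation.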
