Description
Hello,
Problem description
I have two pandas DataFrames that I want to join using pandas (1.1.2).
However, when I join them, the "alerts" column from table B gets its dtype changed from int64 to float64 (as "alerts_cards"):
```
(Pdb) A.dtypes
threshold_step      int64
precision         float64
alerts              int64
alerted_money     float64
dtype: object
(Pdb) B.dtypes
threshold_step      int64
precision         float64
alerts              int64
dtype: object
(Pdb) A.join(B, on="threshold_step", rsuffix="_cards", sort=True).dtypes
threshold_step            int64
precision               float64
alerts                    int64
alerted_money           float64
threshold_step_cards    float64
precision_cards         float64
alerts_cards            float64
dtype: object
```
I usually then remove the join key ("threshold_step_cards"), but now I'm noticing it became a float as well.
The join key column has the same entries in both tables (all integers in the range 0 to 100) and there are no NaNs in my DataFrames.
Expected Output
The join should preserve the dtypes of the original columns. I spent several hours looking for a bug in my code until, with the debugger, I found it came from this pandas join.
Output of `pd.show_versions()`

```
INSTALLED VERSIONS
commit : 2a7d332
python : 3.7.6.final.0
python-bits : 64
OS : Linux
OS-release : 4.15.0-118-generic
Version : #119-Ubuntu SMP Tue Sep 8 12:30:01 UTC 2020
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : pt_PT.UTF-8
pandas : 1.1.2
numpy : 1.18.2
pytz : 2019.3
dateutil : 2.8.1
pip : 20.0.2
setuptools : 46.2.0.post20200511
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : 7.13.0
pandas_datareader: None
bs4 : None
bottleneck : None
fsspec : None
fastparquet : None
gcsfs : None
matplotlib : 3.2.1
numexpr : None
odfpy : None
openpyxl : 3.0.3
pandas_gbq : None
pyarrow : None
pytables : None
pyxlsb : None
s3fs : None
scipy : 1.4.1
sqlalchemy : 1.3.16
tables : None
tabulate : None
xarray : None
xlrd : 1.2.0
xlwt : None
numba : None
```
Thank you
Activity
AlbertoEAF changed the title from "BUG: Join changes type" to "BUG: Join changes dtypes"
AlbertoEAF commented on Sep 26, 2020
Update:
I had to do:
Otherwise it would use the index even though I set `on` as the join column. I'm reading the docs and they say this about join's `on` parameter:

> on: str, list of str, or array-like, optional. Column or index level name(s) in the caller to join on the index in other, otherwise joins index-on-index.

Meaning there's no way to join by column names on both sides? One must always set the index?
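For joining on column names on both sides, `merge` is the pandas API for that; a minimal sketch with hypothetical frames mimicking the ones in this report:

```python
import pandas as pd

A = pd.DataFrame({"threshold_step": [0, 1, 2], "alerts": [5, 3, 1]})
B = pd.DataFrame({"threshold_step": [0, 1, 2], "alerts": [7, 2, 4]})

# merge joins column-to-column on both sides; join's `on` only maps
# the caller's column onto the *index* of the other frame.
merged = A.merge(B, on="threshold_step", suffixes=("", "_cards"))
print(merged.dtypes)  # every key matches, so no NaN coercion: all int64
```

Since every key matches here, no missing values are introduced and the integer dtypes survive the merge.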
dimithras commented on Sep 26, 2020
Could you please provide a code sample defining A and B? I tried reproducing the issue yet I get expected results.
Output of `pd.show_versions()`

```
INSTALLED VERSIONS
commit : d9fff27
python : 3.7.3.final.0
python-bits : 64
OS : Windows
OS-release : 10
Version : 10.0.18362
machine : AMD64
processor : Intel64 Family 6 Model 78 Stepping 3, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : None.None
pandas : 1.1.0
numpy : 1.19.1
pytz : 2020.1
dateutil : 2.8.0
pip : 20.2.3
setuptools : 40.8.0
Cython : 0.29.21
pytest : 6.0.1
hypothesis : 5.36.0
sphinx : 3.2.1
blosc : None
feather : None
xlsxwriter : None
lxml.etree : 4.3.4
html5lib : 1.1
pymysql : None
psycopg2 : None
jinja2 : 2.10.1
IPython : 7.18.1
pandas_datareader: None
bs4 : 4.9.1
bottleneck : None
fsspec : None
fastparquet : None
gcsfs : None
matplotlib : 3.3.0
numexpr : None
odfpy : None
openpyxl : 3.0.4
pandas_gbq : None
pyarrow : None
pytables : None
pyxlsb : None
s3fs : None
scipy : 1.5.2
sqlalchemy : 1.3.19
tables : None
tabulate : 0.8.7
xarray : None
xlrd : 1.2.0
xlwt : None
numba : None
```
dimithras commented on Sep 26, 2020
Tried it also on pandas 1.1.2, getting expected output.
rhshadrach commented on Sep 26, 2020
@AlbertoEAF It looks like you want to use `merge` instead of `join`. pandas `join` is specifically for joining on other's index, never a column. When you specify the `on` argument, you are joining that column of the caller to the index of other. Likely this created missing values (NaN), which coerced the dtype to float.
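This column-to-index behavior can be demonstrated with a small sketch (hypothetical frames, not the reporter's data): the key values don't exist in the other frame's default RangeIndex, so every joined column fills with NaN and int64 coerces to float64.

```python
import pandas as pd

A = pd.DataFrame({"threshold_step": [10, 11], "alerts": [5, 3]})
B = pd.DataFrame({"threshold_step": [10, 11], "alerts": [7, 2]})

# join looks up A["threshold_step"] (values 10, 11) in B's default
# RangeIndex (0, 1) — nothing lines up, so B's columns become all NaN
# and their int64 dtypes are coerced to float64.
out = A.join(B, on="threshold_step", rsuffix="_cards")
print(out["alerts_cards"])        # all NaN
print(out["alerts_cards"].dtype)  # float64
```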
AlbertoEAF commented on Sep 26, 2020
Hello,
There are no NaNs. However, as extra processing to normalize the output shape, I do add extra rows to fill all the integers in a range (let's say 100) and then fill the table with the statistics for all those points. This is the pre-processing step at which the indexes go out of sync:
This "extend thresholds" call results in repeated indices (I don't know how that's possible):
which then breaks the join unless I set "threshold_step" as the index for both tables.
However, not only does `merge` not allow using `rsuffix` to disambiguate my columns, it also results in broken dtypes.
rhshadrach commented on Sep 26, 2020
After the call to `concat` within `extend_table_to_all_thresholds`, there are null values within the `alerts` column of `df`. This coerces the dtype to float. Filling in these values with `0`, they still remain floats. You can use `.astype(int)` after the `fillna` to make them integers again.
AlbertoEAF commented on Sep 26, 2020
You're right @rhshadrach, my bad, but in my real code I actually generate those columns after, not before, and the only way it works is with the indices.
Even so, when I make the "dummy" example similar to my code I don't see the bug here, and I cannot send you the real code, since it needs a lot of data and it's a huge script. Well, I guess we can close this, as I'm not able to reproduce it here with minimal code 🤷.
Regarding the index in the image above, though, does it make sense for it to be repeated? 0, 1, 0, 1, 2, ...?
Thanks for your time @rhshadrach @dimithras ;)
rhshadrach commented on Sep 26, 2020
It looks like the index `[0, 1, 0, 1, 2, ...]` is from using `concat`. This will stack the two DataFrames on top of each other, indices and all. The first `[0, 1]` is from `A`; the rest of the indices are from `missing_threshold_steps`.