Duplicate Labels#
Index objects are not required to be unique; you can have duplicate row or column labels. This may be a bit confusing at first. If you're familiar with SQL, you know that row labels are similar to a primary key on a table, and you would never want duplicates in a SQL table. But one of pandas' roles is to clean messy, real-world data before it goes to some downstream system. And real-world data has duplicates, even in fields that are supposed to be unique.
This section describes how duplicate labels change the behavior of certain operations, and how to prevent duplicates from arising during operations, or to detect them when they do.
In [1]: import pandas as pd
In [2]: import numpy as np
Consequences of duplicate labels#
Some pandas methods (Series.reindex() for example) just don't work when duplicate labels are present. The output can't be determined, so pandas raises an error.
In [3]: s1 = pd.Series([0, 1, 2], index=["a", "b", "b"])
In [4]: s1.reindex(["a", "b", "c"])
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[4], line 1
----> 1 s1.reindex(["a", "b", "c"])
File ~/work/pandas/pandas/pandas/core/series.py:5153, in Series.reindex(self, index, axis, method, copy, level, fill_value, limit, tolerance)
5136 @doc(
5137 NDFrame.reindex, # type: ignore[has-type]
5138 klass=_shared_doc_kwargs["klass"],
(...)
5151 tolerance=None,
5152 ) -> Series:
-> 5153 return super().reindex(
5154 index=index,
5155 method=method,
5156 copy=copy,
5157 level=level,
5158 fill_value=fill_value,
5159 limit=limit,
5160 tolerance=tolerance,
5161 )
File ~/work/pandas/pandas/pandas/core/generic.py:5610, in NDFrame.reindex(self, labels, index, columns, axis, method, copy, level, fill_value, limit, tolerance)
5607 return self._reindex_multi(axes, copy, fill_value)
5609 # perform the reindex on the axes
-> 5610 return self._reindex_axes(
5611 axes, level, limit, tolerance, method, fill_value, copy
5612 ).__finalize__(self, method="reindex")
File ~/work/pandas/pandas/pandas/core/generic.py:5633, in NDFrame._reindex_axes(self, axes, level, limit, tolerance, method, fill_value, copy)
5630 continue
5632 ax = self._get_axis(a)
-> 5633 new_index, indexer = ax.reindex(
5634 labels, level=level, limit=limit, tolerance=tolerance, method=method
5635 )
5637 axis = self._get_axis_number(a)
5638 obj = obj._reindex_with_indexers(
5639 {axis: [new_index, indexer]},
5640 fill_value=fill_value,
5641 copy=copy,
5642 allow_dups=False,
5643 )
File ~/work/pandas/pandas/pandas/core/indexes/base.py:4429, in Index.reindex(self, target, method, level, limit, tolerance)
4426 raise ValueError("cannot handle a non-unique multi-index!")
4427 elif not self.is_unique:
4428 # GH#42568
-> 4429 raise ValueError("cannot reindex on an axis with duplicate labels")
4430 else:
4431 indexer, _ = self.get_indexer_non_unique(target)
ValueError: cannot reindex on an axis with duplicate labels
Other methods, like indexing, can give very surprising results. Typically, indexing with a scalar will reduce dimensionality. Slicing a DataFrame with a scalar will return a Series. Slicing a Series with a scalar will return a scalar. But with duplicate labels, this isn't the case.
In [5]: df1 = pd.DataFrame([[0, 1, 2], [3, 4, 5]], columns=["A", "A", "B"])
In [6]: df1
Out[6]:
A A B
0 0 1 2
1 3 4 5
We have duplicates in the columns. If we slice 'B', we get back a Series
In [7]: df1["B"] # a series
Out[7]:
0 2
1 5
Name: B, dtype: int64
But slicing 'A' returns a DataFrame
In [8]: df1["A"] # a DataFrame
Out[8]:
A A
0 0 1
1 3 4
This applies to row labels as well
In [9]: df2 = pd.DataFrame({"A": [0, 1, 2]}, index=["a", "a", "b"])
In [10]: df2
Out[10]:
A
a 0
a 1
b 2
In [11]: df2.loc["b", "A"] # a scalar
Out[11]: 2
In [12]: df2.loc["a", "A"] # a Series
Out[12]:
a 0
a 1
Name: A, dtype: int64
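Code written under the assumption that a scalar comes back from .loc can therefore break once duplicate labels appear. As a minimal illustration (reusing df2 from above), converting the result to a plain float works for the unique label but fails for the duplicated one:
>>> float(df2.loc["b", "A"])  # unique label: a scalar is returned
2.0
>>> float(df2.loc["a", "A"])  # duplicated label: a Series is returned, so this raises TypeError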
Duplicate label detection#
You can check whether an Index (storing the row or column labels) is unique with Index.is_unique:
In [13]: df2
Out[13]:
A
a 0
a 1
b 2
In [14]: df2.index.is_unique
Out[14]: False
In [15]: df2.columns.is_unique
Out[15]: True
Note
Checking whether an index is unique is somewhat expensive for large datasets. pandas does cache this result, so re-checking on the same index is very fast.
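As a rough illustration of the caching (the index size here is arbitrary), the first access scans the values, while later accesses reuse the cached answer:
>>> big = pd.Index(np.arange(1_000_000))  # a hypothetical large index
>>> big.is_unique  # first check scans all the values
True
>>> big.is_unique  # result is cached, so this is effectively free
True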
Index.duplicated() will return a boolean ndarray indicating whether a label is repeated.
In [16]: df2.index.duplicated()
Out[16]: array([False, True, False])
Which can be used as a boolean filter to drop duplicate rows.
In [17]: df2.loc[~df2.index.duplicated(), :]
Out[17]:
A
a 0
b 2
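Index.duplicated() also accepts a keep argument ("first", "last", or False), so the same pattern can keep the last occurrence of each label instead, as in this small sketch:
>>> df2.loc[~df2.index.duplicated(keep="last"), :]
   A
a  1
b  2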
If you need additional logic to handle duplicate labels, rather than just dropping the repeats, using groupby() on the index is a common trick. For example, we'll resolve duplicates by taking the average of all rows with the same label.
In [18]: df2.groupby(level=0).mean()
Out[18]:
A
a 0.5
b 2.0
Disallowing duplicate labels#
New in version 1.2.0.
As noted above, handling duplicates is an important feature when reading in raw data. That said, you may want to avoid introducing duplicates as part of a data processing pipeline (from methods like pandas.concat(), rename(), etc.). Both Series and DataFrame disallow duplicate labels by calling .set_flags(allows_duplicate_labels=False) (the default is to allow them). If there are duplicate labels, an exception will be raised.
In [19]: pd.Series([0, 1, 2], index=["a", "b", "b"]).set_flags(allows_duplicate_labels=False)
---------------------------------------------------------------------------
DuplicateLabelError Traceback (most recent call last)
Cell In[19], line 1
----> 1 pd.Series([0, 1, 2], index=["a", "b", "b"]).set_flags(allows_duplicate_labels=False)
File ~/work/pandas/pandas/pandas/core/generic.py:508, in NDFrame.set_flags(self, copy, allows_duplicate_labels)
506 df = self.copy(deep=copy and not using_copy_on_write())
507 if allows_duplicate_labels is not None:
--> 508 df.flags["allows_duplicate_labels"] = allows_duplicate_labels
509 return df
File ~/work/pandas/pandas/pandas/core/flags.py:109, in Flags.__setitem__(self, key, value)
107 if key not in self._keys:
108 raise ValueError(f"Unknown flag {key}. Must be one of {self._keys}")
--> 109 setattr(self, key, value)
File ~/work/pandas/pandas/pandas/core/flags.py:96, in Flags.allows_duplicate_labels(self, value)
94 if not value:
95 for ax in obj.axes:
---> 96 ax._maybe_check_unique()
98 self._allows_duplicate_labels = value
File ~/work/pandas/pandas/pandas/core/indexes/base.py:715, in Index._maybe_check_unique(self)
712 duplicates = self._format_duplicate_message()
713 msg += f"\n{duplicates}"
--> 715 raise DuplicateLabelError(msg)
DuplicateLabelError: Index has duplicates.
positions
label
b [1, 2]
This applies to both row and column labels for a DataFrame
In [20]: pd.DataFrame([[0, 1, 2], [3, 4, 5]], columns=["A", "B", "C"],).set_flags(
....: allows_duplicate_labels=False
....: )
....:
Out[20]:
A B C
0 0 1 2
1 3 4 5
This attribute can be checked or set with allows_duplicate_labels, which indicates whether that object can have duplicate labels.
In [21]: df = pd.DataFrame({"A": [0, 1, 2, 3]}, index=["x", "y", "X", "Y"]).set_flags(
....: allows_duplicate_labels=False
....: )
....:
In [22]: df
Out[22]:
A
x 0
y 1
X 2
Y 3
In [23]: df.flags.allows_duplicate_labels
Out[23]: False
DataFrame.set_flags() can be used to return a new DataFrame with attributes like allows_duplicate_labels set to some value
In [24]: df2 = df.set_flags(allows_duplicate_labels=True)
In [25]: df2.flags.allows_duplicate_labels
Out[25]: True
The new DataFrame returned is a view on the same data as the old DataFrame. Or the property can just be set directly on the same object
In [26]: df2.flags.allows_duplicate_labels = False
In [27]: df2.flags.allows_duplicate_labels
Out[27]: False
When processing raw, messy data you might initially read in the messy data (which potentially has duplicate labels), deduplicate, and then disallow duplicates going forward, to ensure that your data processing pipeline doesn't introduce duplicates.
>>> raw = pd.read_csv("...")
>>> deduplicated = raw.groupby(level=0).first() # remove duplicates
>>> deduplicated.flags.allows_duplicate_labels = False # disallow going forward
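Equivalently, since DataFrame.set_flags() returns a new object, the flag can be set as part of the method chain (the file path below is a placeholder, as above):
>>> deduplicated = (
...     pd.read_csv("...")
...     .groupby(level=0)
...     .first()  # remove duplicates
...     .set_flags(allows_duplicate_labels=False)  # disallow going forward
... )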
Setting allows_duplicate_labels=False on a Series or DataFrame with duplicate labels, or performing an operation that introduces duplicate labels on a Series or DataFrame that disallows duplicates, will raise an errors.DuplicateLabelError.
In [28]: df.rename(str.upper)
---------------------------------------------------------------------------
DuplicateLabelError Traceback (most recent call last)
Cell In[28], line 1
----> 1 df.rename(str.upper)
File ~/work/pandas/pandas/pandas/core/frame.py:5767, in DataFrame.rename(self, mapper, index, columns, axis, copy, inplace, level, errors)
5636 def rename(
5637 self,
5638 mapper: Renamer | None = None,
(...)
5646 errors: IgnoreRaise = "ignore",
5647 ) -> DataFrame | None:
5648 """
5649 Rename columns or index labels.
5650
(...)
5765 4 3 6
5766 """
-> 5767 return super()._rename(
5768 mapper=mapper,
5769 index=index,
5770 columns=columns,
5771 axis=axis,
5772 copy=copy,
5773 inplace=inplace,
5774 level=level,
5775 errors=errors,
5776 )
File ~/work/pandas/pandas/pandas/core/generic.py:1140, in NDFrame._rename(self, mapper, index, columns, axis, copy, inplace, level, errors)
1138 return None
1139 else:
-> 1140 return result.__finalize__(self, method="rename")
File ~/work/pandas/pandas/pandas/core/generic.py:6262, in NDFrame.__finalize__(self, other, method, **kwargs)
6255 if other.attrs:
6256 # We want attrs propagation to have minimal performance
6257 # impact if attrs are not used; i.e. attrs is an empty dict.
6258 # One could make the deepcopy unconditionally, but a deepcopy
6259 # of an empty dict is 50x more expensive than the empty check.
6260 self.attrs = deepcopy(other.attrs)
-> 6262 self.flags.allows_duplicate_labels = other.flags.allows_duplicate_labels
6263 # For subclasses using _metadata.
6264 for name in set(self._metadata) & set(other._metadata):
File ~/work/pandas/pandas/pandas/core/flags.py:96, in Flags.allows_duplicate_labels(self, value)
94 if not value:
95 for ax in obj.axes:
---> 96 ax._maybe_check_unique()
98 self._allows_duplicate_labels = value
File ~/work/pandas/pandas/pandas/core/indexes/base.py:715, in Index._maybe_check_unique(self)
712 duplicates = self._format_duplicate_message()
713 msg += f"\n{duplicates}"
--> 715 raise DuplicateLabelError(msg)
DuplicateLabelError: Index has duplicates.
positions
label
X [0, 2]
Y [1, 3]
This error message contains the labels that are duplicated, and the numeric positions of all the duplicates (including the "original") in the Series or DataFrame.
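If you would rather handle this condition yourself than let the exception propagate, errors.DuplicateLabelError can be caught like any other pandas exception. A small sketch reusing df from above:
>>> try:
...     df.rename(str.upper)
... except pd.errors.DuplicateLabelError as err:
...     print("refusing to introduce duplicate labels:", err)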
Duplicate label propagation#
In general, disallowing duplicates is "sticky". It is preserved through operations.
In [29]: s1 = pd.Series(0, index=["a", "b"]).set_flags(allows_duplicate_labels=False)
In [30]: s1
Out[30]:
a 0
b 0
dtype: int64
In [31]: s1.head().rename({"a": "b"})
---------------------------------------------------------------------------
DuplicateLabelError Traceback (most recent call last)
Cell In[31], line 1
----> 1 s1.head().rename({"a": "b"})
File ~/work/pandas/pandas/pandas/core/series.py:5090, in Series.rename(self, index, axis, copy, inplace, level, errors)
5083 axis = self._get_axis_number(axis)
5085 if callable(index) or is_dict_like(index):
5086 # error: Argument 1 to "_rename" of "NDFrame" has incompatible
5087 # type "Union[Union[Mapping[Any, Hashable], Callable[[Any],
5088 # Hashable]], Hashable, None]"; expected "Union[Mapping[Any,
5089 # Hashable], Callable[[Any], Hashable], None]"
-> 5090 return super()._rename(
5091 index, # type: ignore[arg-type]
5092 copy=copy,
5093 inplace=inplace,
5094 level=level,
5095 errors=errors,
5096 )
5097 else:
5098 return self._set_name(index, inplace=inplace, deep=copy)
File ~/work/pandas/pandas/pandas/core/generic.py:1140, in NDFrame._rename(self, mapper, index, columns, axis, copy, inplace, level, errors)
1138 return None
1139 else:
-> 1140 return result.__finalize__(self, method="rename")
File ~/work/pandas/pandas/pandas/core/generic.py:6262, in NDFrame.__finalize__(self, other, method, **kwargs)
6255 if other.attrs:
6256 # We want attrs propagation to have minimal performance
6257 # impact if attrs are not used; i.e. attrs is an empty dict.
6258 # One could make the deepcopy unconditionally, but a deepcopy
6259 # of an empty dict is 50x more expensive than the empty check.
6260 self.attrs = deepcopy(other.attrs)
-> 6262 self.flags.allows_duplicate_labels = other.flags.allows_duplicate_labels
6263 # For subclasses using _metadata.
6264 for name in set(self._metadata) & set(other._metadata):
File ~/work/pandas/pandas/pandas/core/flags.py:96, in Flags.allows_duplicate_labels(self, value)
94 if not value:
95 for ax in obj.axes:
---> 96 ax._maybe_check_unique()
98 self._allows_duplicate_labels = value
File ~/work/pandas/pandas/pandas/core/indexes/base.py:715, in Index._maybe_check_unique(self)
712 duplicates = self._format_duplicate_message()
713 msg += f"\n{duplicates}"
--> 715 raise DuplicateLabelError(msg)
DuplicateLabelError: Index has duplicates.
positions
label
b [0, 1]
Warning
This is an experimental feature. Currently, many methods fail to propagate the allows_duplicate_labels value. In future versions it is expected that every method taking or returning one or more DataFrame or Series objects will propagate allows_duplicate_labels.
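Until that happens, one defensive pattern is to re-assert the flag after any operation whose propagation behavior you have not verified: set_flags raises immediately if duplicates have crept in. A sketch, using the s1 defined above and pd.concat() purely as an arbitrary example operation:
>>> result = pd.concat([s1])  # propagation is not guaranteed for every method
>>> result = result.set_flags(allows_duplicate_labels=False)  # re-assert; raises if duplicates crept in
>>> result.flags.allows_duplicate_labels
False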