README.md (2 changes: 1 addition & 1 deletion)
@@ -239,7 +239,7 @@ There is currently only one GUC parameter to enable/disable the `pg_parquet`:
> * `numeric(9 < P <= 18, S)` is represented as `INT64` with `DECIMAL` logical type
> * `numeric(18 < P <= 38, S)` is represented as `FIXED_LEN_BYTE_ARRAY(9-16)` with `DECIMAL` logical type
> * `numeric(38 < P, S)` is represented as `BYTE_ARRAY` with `STRING` logical type
- > * `numeric` is allowed by Postgres. (precision and scale not specified). These are represented by a default precision (38) and scale (16) instead of writing them as string. You get runtime error if your table tries to read or write a numeric value which is not allowed by the default precision and scale (22 integral digits before decimal point, 16 digits after decimal point).
+ > * `numeric` is allowed by Postgres. (precision and scale not specified). These are represented by a default precision (38) and scale (9) instead of writing them as string. You get runtime error if your table tries to read or write a numeric value which is not allowed by the default precision and scale (29 integral digits before decimal point, 9 digits after decimal point).
> - (2) The `date` type is represented according to `Unix epoch` when writing to Parquet files. It is converted back according to `PostgreSQL epoch` when reading from Parquet files.
> - (3) The `timestamptz` and `timetz` types are adjusted to `UTC` when writing to Parquet files. They are converted back with `UTC` timezone when reading from Parquet files.
> - (4) The `geometry` type is represented as `BYTE_ARRAY` encoded as `WKB` when `postgis` extension is created. Otherwise, it is represented as `BYTE_ARRAY` with `STRING` logical type.
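For context on the numeric mapping quoted above, the sketch below restates the README's rules as Rust. The names (`ParquetNumericRepr`, `numeric_repr`) are hypothetical and not part of pg_parquet; only the precision ranges and the 38/9 default for unbounded `numeric` come from the README text.

```rust
// Hypothetical illustration of the README's numeric-to-Parquet mapping.
// The type and function names here are not from pg_parquet.

#[derive(Debug, PartialEq)]
enum ParquetNumericRepr {
    Int32Decimal,             // numeric(P <= 9, S)
    Int64Decimal,             // numeric(9 < P <= 18, S)
    FixedLenByteArrayDecimal, // numeric(18 < P <= 38, S)
    ByteArrayString,          // numeric(P > 38, S)
}

/// `precision_and_scale` is `None` for an unbounded `numeric` column.
fn numeric_repr(precision_and_scale: Option<(u32, u32)>) -> (ParquetNumericRepr, u32, u32) {
    // Unbounded numeric falls back to the defaults: precision 38, scale 9,
    // which leaves 38 - 9 = 29 digits before the decimal point.
    let (precision, scale) = precision_and_scale.unwrap_or((38, 9));
    let repr = match precision {
        0..=9 => ParquetNumericRepr::Int32Decimal,
        10..=18 => ParquetNumericRepr::Int64Decimal,
        19..=38 => ParquetNumericRepr::FixedLenByteArrayDecimal,
        _ => ParquetNumericRepr::ByteArrayString,
    };
    (repr, precision, scale)
}

fn main() {
    assert_eq!(numeric_repr(Some((12, 2))).0, ParquetNumericRepr::Int64Decimal);
    // Unbounded numeric: fixed-length byte array decimal with precision 38, scale 9.
    assert_eq!(
        numeric_repr(None),
        (ParquetNumericRepr::FixedLenByteArrayDecimal, 38, 9)
    );
}
```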
src/pgrx_tests/copy_type_roundtrip.rs (4 changes: 2 additions & 2 deletions)
@@ -863,7 +863,7 @@ mod tests {

#[pg_test]
#[should_panic(
expected = "numeric value contains 23 digits before decimal point, which exceeds max allowed integral digits 22 during copy to parquet"
expected = "numeric value contains 30 digits before decimal point, which exceeds max allowed integral digits 29 during copy to parquet"
)]
fn test_invalid_unbounded_numeric_integral_digits() {
let invalid_integral_digits =
@@ -879,7 +879,7 @@ mod tests {

#[pg_test]
#[should_panic(
expected = "numeric value contains 17 digits after decimal point, which exceeds max allowed decimal digits 16 during copy to parquet"
expected = "numeric value contains 10 digits after decimal point, which exceeds max allowed decimal digits 9 during copy to parquet"
)]
fn test_invalid_unbounded_numeric_decimal_digits() {
let invalid_decimal_digits = DEFAULT_UNBOUNDED_NUMERIC_SCALE + 1;
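The updated `#[should_panic]` messages reflect the new digit budget for unbounded numerics: at most 29 digits before and 9 digits after the decimal point. Below is a minimal, hypothetical sketch of such a check; the function name, zero-trimming behavior, and exact error wording are assumptions, not pg_parquet's actual implementation.

```rust
// Illustrative digit-budget check for unbounded numeric values.
// Limits mirror the new defaults (38 - 9 = 29 integral digits, 9 decimal digits).

const MAX_INTEGRAL_DIGITS: usize = 29;
const MAX_DECIMAL_DIGITS: usize = 9;

fn check_unbounded_numeric(value: &str) -> Result<(), String> {
    // Ignore an optional sign and split into integral/decimal parts.
    let value = value.trim_start_matches(|c| c == '+' || c == '-');
    let (integral, decimal) = value.split_once('.').unwrap_or((value, ""));

    // Leading zeros do not count toward the integral budget in this sketch.
    let integral_digits = integral.trim_start_matches('0').len();
    if integral_digits > MAX_INTEGRAL_DIGITS {
        return Err(format!(
            "numeric value contains {integral_digits} digits before decimal point, \
             which exceeds max allowed integral digits {MAX_INTEGRAL_DIGITS}"
        ));
    }

    // Trailing zeros do not count toward the decimal budget in this sketch.
    let decimal_digits = decimal.trim_end_matches('0').len();
    if decimal_digits > MAX_DECIMAL_DIGITS {
        return Err(format!(
            "numeric value contains {decimal_digits} digits after decimal point, \
             which exceeds max allowed decimal digits {MAX_DECIMAL_DIGITS}"
        ));
    }

    Ok(())
}

fn main() {
    // 29 integral digits and 9 decimal digits still fit the budget ...
    assert!(check_unbounded_numeric(&format!("{}.{}", "9".repeat(29), "9".repeat(9))).is_ok());
    // ... but a 30th integral digit or a 10th decimal digit is rejected,
    // matching the counts asserted by the tests above.
    assert!(check_unbounded_numeric(&"9".repeat(30)).is_err());
    assert!(check_unbounded_numeric(&format!("0.{}", "9".repeat(10))).is_err());
}
```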
src/type_compat/pg_arrow_type_conversions.rs (2 changes: 1 addition & 1 deletion)
@@ -277,7 +277,7 @@ fn rescale_unbounded_numeric_or_error(

const MAX_NUMERIC_PRECISION: u32 = 38;
pub(crate) const DEFAULT_UNBOUNDED_NUMERIC_PRECISION: u32 = MAX_NUMERIC_PRECISION;
- pub(crate) const DEFAULT_UNBOUNDED_NUMERIC_SCALE: u32 = 16;
+ pub(crate) const DEFAULT_UNBOUNDED_NUMERIC_SCALE: u32 = 9;
pub(crate) const DEFAULT_UNBOUNDED_NUMERIC_MAX_INTEGRAL_DIGITS: u32 =
DEFAULT_UNBOUNDED_NUMERIC_PRECISION - DEFAULT_UNBOUNDED_NUMERIC_SCALE;

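The derived limit follows directly from these constants: with scale 9 the integral budget becomes 38 - 9 = 29 digits (previously 38 - 16 = 22). A quick standalone check of that arithmetic, using the constant values copied from the diff above:

```rust
// Sanity check of the constants' arithmetic (values copied from the diff above).
const MAX_NUMERIC_PRECISION: u32 = 38;
const DEFAULT_UNBOUNDED_NUMERIC_PRECISION: u32 = MAX_NUMERIC_PRECISION;
const DEFAULT_UNBOUNDED_NUMERIC_SCALE: u32 = 9; // was 16 before this change
const DEFAULT_UNBOUNDED_NUMERIC_MAX_INTEGRAL_DIGITS: u32 =
    DEFAULT_UNBOUNDED_NUMERIC_PRECISION - DEFAULT_UNBOUNDED_NUMERIC_SCALE;

fn main() {
    // 38 - 9 = 29 integral digits now; it was 38 - 16 = 22 before.
    assert_eq!(DEFAULT_UNBOUNDED_NUMERIC_MAX_INTEGRAL_DIGITS, 29);
}
```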