
Commit 669dca9

[Spark] Improve Delta Protocol Transitions (#2848)
#### Which Delta project/connector is this regarding?

- [x] Spark
- [ ] Standalone
- [ ] Flink
- [ ] Kernel
- [ ] Other (fill in here)

## Description

Currently, protocol transitions can be hard to manage. A few examples:

- It is hard to predict the output of certain operations.
- Once a legacy protocol transitions to a Table Features protocol, it is quite hard to transition back to a legacy protocol.
- Adding a feature to a protocol and then removing it might lead to a different protocol.
- Adding an explicit feature to a legacy protocol always leads to a Table Features protocol, although this might not be necessary.
- Dropping features from legacy protocols is not supported; as a result, the order in which features are dropped matters.
- Default protocol versions are ignored in some cases.
- Enabling table features by default results in feature loss in legacy protocols.
- CREATE TABLE ignores any legacy versions set if there is also a table feature in the definition.

This PR proposes several protocol transition improvements in order to simplify user journeys. The high-level proposal is: two protocol representations with singular operational semantics. That is, a protocol can be represented in two ways: (a) the legacy representation and (b) the Table Features representation. The latter is strictly more expressive: it can represent every legacy protocol, but the opposite is not true. Three simple rules follow:

1. All operations should be allowed on both protocol representations and should yield equivalent results.
2. The result should always be represented in the weaker (legacy) form when possible.
3. Conversely, if the result of an operation on a legacy protocol cannot be represented with the legacy representation, use the Table Features representation.

**The PR introduces the following behavioural changes:**

1. All protocol operations are now followed by denormalisation and then normalisation. Previously, normalisation was only performed after dropping a feature.
2. Legacy features can now be dropped directly from a legacy protocol. The result is represented with table features if it cannot be represented with a legacy protocol.
3. Operations on Table Features protocols now take into account the default protocol versions. For example, enabling deletion vectors on a table results in protocol `(3, 7, AppendOnly, Invariants, DeletionVectors)`.
4. Operations on Table Features protocols now take into account any protocol versions set on the table. For example, creating a table with protocol `(1, 3)` and deletion vectors results in protocol `(3, 7, AppendOnly, Invariants, CheckConstraints, DeletionVectors)`.
5. It is no longer possible to have a Table Features protocol without table features. For example, creating a table with `(3, 7)` and no table features is now normalised to `(1, 1)`.
6. Column Mapping can now be automatically enabled on legacy protocols when the mode is changed explicitly.
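To make behavioural changes 2 and 4 concrete, here is a short Spark SQL walkthrough (a sketch, not part of this PR: it assumes a `spark` session, the table names `t` and `s` are hypothetical, and the expected protocols follow the examples above):

```scala
// Behavioural change 4: explicit legacy versions are merged as a feature group.
spark.sql("""
  CREATE TABLE t (id INT) USING delta TBLPROPERTIES (
    'delta.minReaderVersion' = '1',
    'delta.minWriterVersion' = '3',         -- implies AppendOnly, Invariants, CheckConstraints
    'delta.enableDeletionVectors' = 'true'  -- requires the table features representation
  )""")
// Expected protocol: (3, 7, AppendOnly, Invariants, CheckConstraints, DeletionVectors).

// Behavioural change 2: dropping a legacy feature directly from a legacy protocol.
spark.sql("CREATE TABLE s (id INT) USING delta TBLPROPERTIES ('delta.minWriterVersion' = '3')")
spark.sql("ALTER TABLE s DROP FEATURE checkConstraints")
// (1, 3) minus CheckConstraints leaves exactly the legacy (1, 2) feature set
// {AppendOnly, Invariants}, so the result normalises back to a legacy protocol.
```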
## How was this patch tested?

Added `DeltaProtocolTransitionsSuite`. Also modified existing tests in `DeltaProtocolVersionSuite`.

## Does this PR introduce _any_ user-facing changes?

Yes.
1 parent 4430dc1 commit 669dca9

22 files changed (+1142, -515 lines)

spark/src/main/resources/error/delta-error-classes.json

Lines changed: 0 additions & 14 deletions

```diff
@@ -2514,20 +2514,6 @@
       ],
       "sqlState" : "0AKDC"
     },
-    "DELTA_UNSUPPORTED_COLUMN_MAPPING_PROTOCOL" : {
-      "message" : [
-        "",
-        "Your current table protocol version does not support changing column mapping modes",
-        "using <config>.",
-        "",
-        "Required Delta protocol version for column mapping:",
-        "<requiredVersion>",
-        "Your table's current Delta protocol version:",
-        "<currentVersion>",
-        "<advice>"
-      ],
-      "sqlState" : "KD004"
-    },
     "DELTA_UNSUPPORTED_COLUMN_MAPPING_SCHEMA_CHANGE" : {
       "message" : [
         "",
```

spark/src/main/scala/org/apache/spark/sql/delta/DeltaColumnMapping.scala

Lines changed: 3 additions & 30 deletions

```diff
@@ -86,9 +86,6 @@ trait DeltaColumnMappingBase extends DeltaLogging {
     RowIdMetadataStructField.isRowIdColumn(field) ||
     RowCommitVersion.MetadataStructField.isRowCommitVersionColumn(field)

-  def satisfiesColumnMappingProtocol(protocol: Protocol): Boolean =
-    protocol.isFeatureSupported(ColumnMappingTableFeature)
-
   /**
    * Allow NameMapping -> NoMapping transition behind a feature flag.
    * Otherwise only NoMapping -> NameMapping is allowed.
@@ -134,33 +131,9 @@ trait DeltaColumnMappingBase extends DeltaLogging {
     }

     val isChangingModeOnExistingTable = oldMappingMode != newMappingMode && !isCreatingNewTable
-    if (isChangingModeOnExistingTable) {
-      if (!allowMappingModeChange(oldMappingMode, newMappingMode)) {
-        throw DeltaErrors.changeColumnMappingModeNotSupported(
-          oldMappingMode.name, newMappingMode.name)
-      } else {
-        // legal mode change, now check if protocol is upgraded before or part of this txn
-        val caseInsensitiveMap = CaseInsensitiveMap(newMetadata.configuration)
-        val minReaderVersion = caseInsensitiveMap
-          .get(Protocol.MIN_READER_VERSION_PROP).map(_.toInt)
-          .getOrElse(oldProtocol.minReaderVersion)
-        val minWriterVersion = caseInsensitiveMap
-          .get(Protocol.MIN_WRITER_VERSION_PROP).map(_.toInt)
-          .getOrElse(oldProtocol.minWriterVersion)
-        var newProtocol = Protocol(minReaderVersion, minWriterVersion)
-        val satisfiesWriterVersion = minWriterVersion >= ColumnMappingTableFeature.minWriterVersion
-        val satisfiesReaderVersion = minReaderVersion >= ColumnMappingTableFeature.minReaderVersion
-        // This is an OR check because `readerFeatures` and `writerFeatures` can independently
-        // support table features.
-        if ((newProtocol.supportsReaderFeatures && satisfiesWriterVersion) ||
-            (newProtocol.supportsWriterFeatures && satisfiesReaderVersion)) {
-          newProtocol = newProtocol.withFeature(ColumnMappingTableFeature)
-        }
-
-        if (!satisfiesColumnMappingProtocol(newProtocol)) {
-          throw DeltaErrors.changeColumnMappingModeOnOldProtocol(oldProtocol)
-        }
-      }
+    if (isChangingModeOnExistingTable && !allowMappingModeChange(oldMappingMode, newMappingMode)) {
+      throw DeltaErrors.changeColumnMappingModeNotSupported(
+        oldMappingMode.name, newMappingMode.name)
     }

     val updatedMetadata = updateColumnMappingMetadata(
```
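With the protocol check removed, an explicit mapping-mode change on a legacy-protocol table now upgrades the protocol implicitly instead of throwing (behavioural change 6 in the description). A minimal sketch, with a hypothetical table name:

```scala
// Before this change, the ALTER below failed with
// DELTA_UNSUPPORTED_COLUMN_MAPPING_PROTOCOL on tables below the required protocol.
spark.sql("CREATE TABLE cm_demo (id INT) USING delta")  // legacy protocol, e.g. (1, 2)
spark.sql("ALTER TABLE cm_demo SET TBLPROPERTIES ('delta.columnMapping.mode' = 'name')")
// Expected: the protocol is upgraded to support ColumnMappingTableFeature
// as part of the same transaction, and the mode change succeeds.
```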

spark/src/main/scala/org/apache/spark/sql/delta/DeltaErrors.scala

Lines changed: 5 additions & 28 deletions

```diff
@@ -2043,41 +2043,18 @@ trait DeltaErrorsBase
         mode.name))
   }

-  def changeColumnMappingModeOnOldProtocol(oldProtocol: Protocol): Throwable = {
-    val requiredProtocol = {
-      if (oldProtocol.supportsReaderFeatures || oldProtocol.supportsWriterFeatures) {
-        Protocol(
-          TableFeatureProtocolUtils.TABLE_FEATURES_MIN_READER_VERSION,
-          TableFeatureProtocolUtils.TABLE_FEATURES_MIN_WRITER_VERSION)
-          .withFeature(ColumnMappingTableFeature)
-      } else {
-        ColumnMappingTableFeature.minProtocolVersion
-      }
-    }
-
-    new DeltaColumnMappingUnsupportedException(
-      errorClass = "DELTA_UNSUPPORTED_COLUMN_MAPPING_PROTOCOL",
-      messageParameters = Array(
-        s"${DeltaConfigs.COLUMN_MAPPING_MODE.key}",
-        s"$requiredProtocol",
-        s"$oldProtocol",
-        columnMappingAdviceMessage(requiredProtocol)))
-  }
-
-  private def columnMappingAdviceMessage(
+  protected def columnMappingAdviceMessage(
       requiredProtocol: Protocol = ColumnMappingTableFeature.minProtocolVersion): String = {
+    val readerVersion = requiredProtocol.minReaderVersion
+    val writerVersion = requiredProtocol.minWriterVersion
     s"""
        |Please enable Column Mapping on your Delta table with mapping mode 'name'.
        |You can use one of the following commands.
        |
-       |If your table is already on the required protocol version:
        |ALTER TABLE table_name SET TBLPROPERTIES ('delta.columnMapping.mode' = 'name')
        |
-       |If your table is not on the required protocol version and requires a protocol upgrade:
-       |ALTER TABLE table_name SET TBLPROPERTIES (
-       |  'delta.columnMapping.mode' = 'name',
-       |  'delta.minReaderVersion' = '${requiredProtocol.minReaderVersion}',
-       |  'delta.minWriterVersion' = '${requiredProtocol.minWriterVersion}')
+       |Note, if your table is not on the required protocol version it will be upgraded.
+       |Column mapping requires at least protocol ($readerVersion, $writerVersion)
        |""".stripMargin
   }
```

spark/src/main/scala/org/apache/spark/sql/delta/OptimisticTransaction.scala

Lines changed: 21 additions & 4 deletions

```diff
@@ -538,6 +538,22 @@ trait OptimisticTransactionImpl extends TransactionalWrite

     val newProtocolForLatestMetadata =
       Protocol(readerVersionAsTableProp, writerVersionAsTableProp)
+
+    // The user-supplied protocol version numbers are treated as a group of features
+    // that must all be enabled. This ensures that the feature-enabling behavior is the
+    // same on Table Features-enabled protocols as on legacy protocols, i.e., exactly
+    // the same set of features are enabled.
+    //
+    // This is useful for supporting protocol downgrades to legacy protocol versions.
+    // When the protocol versions are explicitly set on a table features protocol we may
+    // normalize to legacy protocol versions. Legacy protocol versions can only be
+    // used if a table supports *exactly* the set of features in that legacy protocol
+    // version, with no "gaps". By merging in the protocol features from a particular
+    // protocol version, we may end up with such a "gap-free" protocol. E.g. if a table
+    // has only table feature "checkConstraints" (added by writer protocol version 3)
+    // but not "invariants" and "appendOnly", then setting the minWriterVersion to
+    // 2 or 3 will add "invariants" and "appendOnly", filling in the gaps for writer
+    // protocol version 3, and then we can downgrade to version 3.
     val proposedNewProtocol = protocolBeforeUpdate.merge(newProtocolForLatestMetadata)

     if (proposedNewProtocol != protocolBeforeUpdate) {
@@ -620,16 +636,14 @@ trait OptimisticTransactionImpl extends TransactionalWrite
         Protocol(
           readerVersionForNewProtocol,
           TableFeatureProtocolUtils.TABLE_FEATURES_MIN_WRITER_VERSION)
-          .merge(newProtocolBeforeAddingFeatures)
-          .withFeatures(newFeaturesFromTableConf))
+          .withFeatures(newFeaturesFromTableConf)
+          .merge(newProtocolBeforeAddingFeatures))
     }

     // We are done with protocol versions and features, time to remove related table properties.
     val configsWithoutProtocolProps = newMetadataTmp.configuration.filterNot {
       case (k, _) => TableFeatureProtocolUtils.isTableProtocolProperty(k)
     }
-    newMetadataTmp = newMetadataTmp.copy(configuration = configsWithoutProtocolProps)
-
     // Table features Part 3: add automatically-enabled features by looking at the new table
     // metadata.
     //
@@ -639,6 +653,9 @@ trait OptimisticTransactionImpl extends TransactionalWrite
       setNewProtocolWithFeaturesEnabledByMetadata(newMetadataTmp)
     }

+    newMetadataTmp = newMetadataTmp.copy(configuration = configsWithoutProtocolProps)
+    Protocol.assertMetadataContainsNoProtocolProps(newMetadataTmp)
+
     newMetadataTmp = MaterializedRowId.updateMaterializedColumnName(
       protocol, oldMetadata = snapshot.metadata, newMetadataTmp)
     newMetadataTmp = MaterializedRowCommitVersion.updateMaterializedColumnName(
```
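The long comment in the first hunk carries the key reasoning. As a sketch of the "gap-filling" effect it describes, here is a self-contained toy model (abbreviated, assumed legacy feature tables; not Delta's actual classes):

```scala
// Toy model of the "gap-free" downgrade reasoning (not Delta's implementation).
object GapFillingSketch extends App {
  // Features implied by legacy writer versions (abbreviated, assumed mapping).
  val legacyWriterFeatures: Map[Int, Set[String]] = Map(
    2 -> Set("appendOnly", "invariants"),
    3 -> Set("appendOnly", "invariants", "checkConstraints"))

  // A table features protocol supporting only checkConstraints has "gaps".
  val tableFeatures = Set("checkConstraints")

  // Setting minWriterVersion = 3 merges in that version's whole feature group...
  val merged = tableFeatures ++ legacyWriterFeatures(3)

  // ...which now matches legacy writer version 3 exactly, so the protocol can be
  // normalized down to the legacy representation (1, 3).
  val normalizable = legacyWriterFeatures.exists { case (_, fs) => fs == merged }
  println(s"merged=$merged, can normalize to legacy: $normalizable") // true
}
```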

spark/src/main/scala/org/apache/spark/sql/delta/TableFeature.scala

Lines changed: 14 additions & 5 deletions

```diff
@@ -332,8 +332,17 @@ object TableFeature {
    * Warning: Do not call `get` on this Map to get a specific feature because keys in this map are
    * in lower cases. Use [[featureNameToFeature]] instead.
    */
-  private[delta] val allSupportedFeaturesMap: Map[String, TableFeature] = {
-    var features: Set[TableFeature] = Set(
+  private[delta] def allSupportedFeaturesMap: Map[String, TableFeature] = {
+    val testingFeaturesEnabled =
+      try {
+        SparkSession
+          .getActiveSession
+          .map(_.conf.get(DeltaSQLConf.TABLE_FEATURES_TEST_FEATURES_ENABLED))
+          .getOrElse(true)
+      } catch {
+        case _ => true
+      }
+    var features: Set[TableFeature] = Set(
       AllowColumnDefaultsTableFeature,
       AppendOnlyTableFeature,
       ChangeDataFeedTableFeature,
@@ -355,7 +364,7 @@ object TableFeature {
       InCommitTimestampTableFeature,
       VariantTypeTableFeature,
       CoordinatedCommitsTableFeature)
-    if (DeltaUtils.isTesting) {
+    if (DeltaUtils.isTesting && testingFeaturesEnabled) {
       features ++= Set(
         TestLegacyWriterFeature,
         TestLegacyReaderWriterFeature,
@@ -405,8 +414,8 @@ object TableFeature {
   protected def getDroppedExplicitFeatureNames(
       newProtocol: Protocol,
       oldProtocol: Protocol): Option[Set[String]] = {
-    val newFeatureNames = newProtocol.readerAndWriterFeatureNames
-    val oldFeatureNames = oldProtocol.readerAndWriterFeatureNames
+    val newFeatureNames = newProtocol.implicitlyAndExplicitlySupportedFeatures.map(_.name)
+    val oldFeatureNames = oldProtocol.implicitlyAndExplicitlySupportedFeatures.map(_.name)
     Option(oldFeatureNames -- newFeatureNames).filter(_.nonEmpty)
   }
```
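Turning `allSupportedFeaturesMap` from a `val` into a `def` means the map is re-evaluated on each access, so a test can flip `TABLE_FEATURES_TEST_FEATURES_ENABLED` at runtime and observe the change. A toy sketch of why a `val` would not do (stand-in names, not the Delta API):

```scala
// With a val, the right-hand side runs once and the flag read is frozen forever;
// with a def, the flag is consulted on every call.
object LazyFeatureMapSketch extends App {
  var testFeaturesFlag = true  // stand-in for the SQL conf read

  def allFeatures: Set[String] = {
    val base = Set("appendOnly", "invariants")  // stand-ins for real features
    if (testFeaturesFlag) base + "testFeature" else base
  }

  assert(allFeatures.contains("testFeature"))
  testFeaturesFlag = false
  assert(!allFeatures.contains("testFeature"))  // re-evaluated, flag respected
}
```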

spark/src/main/scala/org/apache/spark/sql/delta/actions/TableFeatureSupport.scala

Lines changed: 61 additions & 54 deletions

```diff
@@ -22,10 +22,10 @@ import scala.collection.mutable

 import org.apache.spark.sql.delta._
 import org.apache.spark.sql.delta.DeltaOperations.Operation
+import org.apache.spark.sql.delta.actions.TableFeatureProtocolUtils.TABLE_FEATURES_MIN_WRITER_VERSION
+import org.apache.spark.sql.delta.sources.DeltaSQLConf
 import com.fasterxml.jackson.annotation.JsonIgnore

-import org.apache.spark.sql.SparkSession
-
 /**
  * Trait to be mixed into the [[Protocol]] case class to enable Table Features.
  *
```
```diff
@@ -229,25 +229,16 @@ trait TableFeatureSupport { this: Protocol =>

   /**
    * Determine whether this protocol can be safely upgraded to a new protocol `to`. This means:
-   * - this protocol has reader protocol version less than or equals to `to`.
-   * - this protocol has writer protocol version less than or equals to `to`.
    * - all features supported by this protocol are supported by `to`.
    *
    * Examples regarding feature status:
-   * - from `[appendOnly]` to `[appendOnly]` => allowed
-   * - from `[appendOnly, changeDataFeed]` to `[appendOnly]` => not allowed
-   * - from `[appendOnly]` to `[appendOnly, changeDataFeed]` => allowed
+   * - from `[appendOnly]` to `[appendOnly]` => allowed.
+   * - from `[appendOnly, changeDataFeed]` to `[appendOnly]` => not allowed.
+   * - from `[appendOnly]` to `[appendOnly, changeDataFeed]` => allowed.
    */
-  def canUpgradeTo(to: Protocol): Boolean = {
-    if (to.minReaderVersion < this.minReaderVersion) return false
-    if (to.minWriterVersion < this.minWriterVersion) return false
-
-    val thisFeatures =
-      this.readerAndWriterFeatureNames ++ this.implicitlySupportedFeatures.map(_.name)
-    val toFeatures = to.readerAndWriterFeatureNames ++ to.implicitlySupportedFeatures.map(_.name)
-    // all features supported by `this` are supported by `to`
-    thisFeatures.subsetOf(toFeatures)
-  }
+  def canUpgradeTo(to: Protocol): Boolean =
+    // All features supported by `this` are supported by `to`.
+    implicitlyAndExplicitlySupportedFeatures.subsetOf(to.implicitlyAndExplicitlySupportedFeatures)

   /**
    * Determine whether this protocol can be safely downgraded to a new protocol `to`.
```
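Upgrade safety is now a pure feature-set subset check; the version-number comparisons are gone. A minimal model of the new semantics, with feature names standing in for the real `TableFeature` objects:

```scala
// Toy model: a protocol upgrade is legal iff the source's supported feature
// set (implicit + explicit) is a subset of the target's.
def canUpgradeTo(from: Set[String], to: Set[String]): Boolean = from.subsetOf(to)

assert(canUpgradeTo(Set("appendOnly"), Set("appendOnly")))                    // allowed
assert(!canUpgradeTo(Set("appendOnly", "changeDataFeed"), Set("appendOnly"))) // not allowed
assert(canUpgradeTo(Set("appendOnly"), Set("appendOnly", "changeDataFeed")))  // allowed
```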
```diff
@@ -287,12 +278,14 @@ trait TableFeatureSupport { this: Protocol =>
     val mergedProtocol = Protocol(mergedReaderVersion, mergedWriterVersion)
       .withReaderFeatures(mergedReaderFeatures)
       .withWriterFeatures(mergedWriterFeatures)
-
-    if (mergedProtocol.supportsReaderFeatures || mergedProtocol.supportsWriterFeatures) {
-      mergedProtocol.withFeatures(mergedImplicitFeatures)
-    } else {
-      mergedProtocol
-    }
+      .withFeatures(mergedImplicitFeatures)
+
+    // The merged protocol is always normalized in order to represent the protocol
+    // in the weakest possible form, which enables backward compatibility.
+    // Normalization is preceded by a denormalization step, which allows fixing invalid
+    // legacy protocols. For example, (2, 3) is normalized to (1, 3), because no legacy
+    // feature set with reader version 2 exists unless the writer version is at least 5.
+    mergedProtocol.denormalizedNormalized
   }
```
```diff
@@ -323,63 +316,77 @@ trait TableFeatureSupport { this: Protocol =>
    * the feature exists in the protocol. There is a relevant validation at
    * [[AlterTableDropFeatureDeltaCommand]]. We also require targetFeature is removable.
    *
-   * When the feature to remove is the last explicit table feature of the table we also remove the
-   * TableFeatures feature and downgrade the protocol.
+   * After removing the feature we normalize the protocol.
    */
   def removeFeature(targetFeature: TableFeature): Protocol = {
     require(targetFeature.isRemovable)
+    val currentProtocol = this.denormalized
     val newProtocol = targetFeature match {
       case f@(_: ReaderWriterFeature | _: LegacyReaderWriterFeature) =>
-        removeReaderWriterFeature(f)
+        currentProtocol.removeReaderWriterFeature(f)
       case f@(_: WriterFeature | _: LegacyWriterFeature) =>
-        removeWriterFeature(f)
+        currentProtocol.removeWriterFeature(f)
       case f =>
         throw DeltaErrors.dropTableFeatureNonRemovableFeature(f.name)
     }
-    newProtocol.downgradeProtocolVersionsIfNeeded
+    newProtocol.normalized
   }

   /**
-   * If the current protocol does not contain any non-legacy table features and the remaining
-   * set of legacy table features exactly matches a legacy protocol version, it downgrades the
-   * protocol to the minimum reader/writer versions required to support the protocol's legacy
-   * features.
+   * Protocol normalization is the process of converting a table features protocol to the weakest
+   * possible form. This primarily refers to converting a table features protocol to a legacy
+   * protocol. A Table Features protocol can be represented with the legacy representation only
+   * when the feature set of the former exactly matches a legacy protocol.
+   *
+   * Normalization can also decrease the reader version of a table features protocol when it is
+   * higher than necessary.
    *
-   * Note, when a table is initialized with table features (3, 7), by default there are no legacy
-   * features. After we remove the last native feature we downgrade the protocol to (1, 1).
+   * For example:
+   * (1, 7, AppendOnly, Invariants, CheckConstraints) -> (1, 3)
+   * (3, 7, RowTracking) -> (1, 7, RowTracking)
    */
-  def downgradeProtocolVersionsIfNeeded: Protocol = {
-    if (nativeReaderAndWriterFeatures.nonEmpty) {
-      val (minReaderVersion, minWriterVersion) =
-        TableFeatureProtocolUtils.minimumRequiredVersions(readerAndWriterFeatures)
-      // It is guaranteed by the definitions of WriterFeature and ReaderFeature, that we cannot
-      // end up with invalid protocol versions such as (3, 3). Nevertheless,
-      // we double check it here.
-      val newProtocol =
-        Protocol(minReaderVersion, minWriterVersion).withFeatures(readerAndWriterFeatures)
-      assert(
-        newProtocol.supportsWriterFeatures,
-        s"Downgraded protocol should at least support writer features, but got $newProtocol.")
-      return newProtocol
-    }
+  def normalized: Protocol = {
+    // Normalization can only be applied to table feature protocols.
+    if (!supportsWriterFeatures) return this

     val (minReaderVersion, minWriterVersion) =
       TableFeatureProtocolUtils.minimumRequiredVersions(readerAndWriterFeatures)
     val newProtocol = Protocol(minReaderVersion, minWriterVersion)

-    assert(
-      !newProtocol.supportsReaderFeatures && !newProtocol.supportsWriterFeatures,
-      s"Downgraded protocol should not support table features, but got $newProtocol.")
-
-    // Ensure the legacy protocol supports features exactly as the current protocol.
     if (this.implicitlyAndExplicitlySupportedFeatures ==
         newProtocol.implicitlyAndExplicitlySupportedFeatures) {
       newProtocol
     } else {
-      this
+      Protocol(minReaderVersion, TABLE_FEATURES_MIN_WRITER_VERSION)
+        .withFeatures(readerAndWriterFeatures)
     }
   }

+  /**
+   * Protocol denormalization is the process of converting a legacy protocol to the
+   * equivalent table features protocol. This is the inverse of protocol normalization.
+   * It can be used to allow operations on legacy protocols that yield a result which
+   * can no longer be represented by a legacy protocol.
+   */
+  def denormalized: Protocol = {
+    // Denormalization can only be applied to legacy protocols.
+    if (supportsWriterFeatures) return this
+
+    val (minReaderVersion, _) =
+      TableFeatureProtocolUtils.minimumRequiredVersions(implicitlySupportedFeatures.toSeq)
+
+    Protocol(minReaderVersion, TABLE_FEATURES_MIN_WRITER_VERSION)
+      .withFeatures(implicitlySupportedFeatures)
+  }
+
+  /**
+   * Helper method that applies both denormalization and normalization. This can be used to
+   * normalize invalid legacy protocols such as (2, 3) or (1, 5). A legacy protocol is invalid
+   * when the version numbers are higher than required to support the implied feature set.
+   */
+  def denormalizedNormalized: Protocol = denormalized.normalized
+
   /**
    * Check if a `feature` is supported by this protocol. This means either (a) the protocol does
    * not support table features and implicitly supports the feature, or (b) the protocol supports
```
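To make the `normalized`/`denormalized` round trip concrete, the following self-contained sketch (simplified, assumed legacy feature tables; not the Delta implementation) reproduces the two examples from the `normalized` scaladoc:

```scala
// Simplified model of protocol normalization (feature tables are assumed/abbreviated).
object NormalizationSketch extends App {
  case class Proto(reader: Int, writer: Int, features: Set[String] = Set.empty)

  // Legacy (reader, writer) versions and the exact feature sets they imply.
  val legacy: Map[(Int, Int), Set[String]] = Map(
    (1, 1) -> Set.empty,
    (1, 2) -> Set("appendOnly", "invariants"),
    (1, 3) -> Set("appendOnly", "invariants", "checkConstraints"))

  // Minimum reader version each feature set requires (toy rule: only
  // columnMapping bumps the reader version here).
  def minReader(fs: Set[String]): Int = if (fs.contains("columnMapping")) 2 else 1

  // Weakest form: an exact legacy match wins; otherwise keep the table features
  // representation with the lowest reader version the feature set requires.
  def normalized(p: Proto): Proto =
    legacy.collectFirst { case (v, fs) if fs == p.features => Proto(v._1, v._2) }
      .getOrElse(Proto(minReader(p.features), 7, p.features))

  // (1, 7, appendOnly, invariants, checkConstraints) -> (1, 3)
  println(normalized(Proto(1, 7, Set("appendOnly", "invariants", "checkConstraints"))))
  // (3, 7, rowTracking) -> (1, 7, rowTracking): reader version lowered.
  println(normalized(Proto(3, 7, Set("rowTracking"))))
}
```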
