12 changes: 11 additions & 1 deletion src/librustc/front/map/mod.rs
@@ -325,7 +325,17 @@ impl<'ast> Map<'ast> {
                 return DepNode::Krate,
 
             NotPresent =>
-                panic!("Walking parents from `{}` led to `NotPresent` at `{}`", id0, id),
+                // Some nodes, notably struct fields, are not
+                // present in the map for whatever reason, but
+                // they *do* have def-ids. So if we encounter an
+                // empty hole, check for that case.
+                return self.opt_local_def_id(id)
+                           .map(|def_id| DepNode::Hir(def_id))
+                           .unwrap_or_else(|| {
+                               panic!("Walking parents from `{}` \
+                                       led to `NotPresent` at `{}`",
+                                      id0, id)
+                           }),
         }
     }
 }
1,250 changes: 0 additions & 1,250 deletions src/librustc_typeck/variance.rs

This file was deleted.

302 changes: 302 additions & 0 deletions src/librustc_typeck/variance/README.md
@@ -0,0 +1,302 @@
This file infers the variance of type and lifetime parameters. The
algorithm is taken from Section 4 of the paper "Taming the Wildcards:
Combining Definition- and Use-Site Variance" published in PLDI'11 and
written by Altidor et al., and hereafter referred to as The Paper.

This inference is explicitly designed *not* to consider the uses of
types within code. To determine the variance of type parameters
defined on type `X`, we only consider the definition of the type `X`
and the definitions of any types it references.

We only infer variance for type parameters found on *data types*
like structs and enums. In these cases, there is a fairly straightforward
explanation for what variance means. The variance of the type
or lifetime parameters defines whether `T<A>` is a subtype of `T<B>`
(resp. `T<'a>` and `T<'b>`) based on the relationship of `A` and `B`
(resp. `'a` and `'b`).
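
For instance, here is a minimal sketch in modern Rust syntax (the
`shorten` function is hypothetical, written purely for illustration):
because the type parameter of `Option` is inferred covariant,
`Option<&'static str>` is a subtype of `Option<&'a str>` whenever
`'static: 'a`, so the body below type-checks.

```rust
// Covariance in action: `Option<A>` is covariant in `A`, so an
// `Option<&'static str>` can be returned where an `Option<&'a str>`
// is expected.
fn shorten<'a>(x: Option<&'static str>) -> Option<&'a str> {
    x // accepted precisely because of the inferred covariance
}
```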

We do not infer variance for type parameters found on traits, fns,
or impls. Variance on trait parameters can indeed make sense
(and we used to compute it), but it is actually rather subtle in
meaning and not that useful in practice, so we removed it. See the
addendum for some details. Variance on fn/impl parameters, on the
other hand, doesn't make sense, because those parameters are
instantiated and then forgotten; they don't persist in types or
compiled byproducts.

### The algorithm

The basic idea is quite straightforward. We iterate over the types
defined and, for each use of a type parameter X, accumulate a
constraint indicating that the variance of X must be valid for the
variance of that use site. We then iteratively refine the variance of
X until all constraints are met. There is *always* a solution, because at
the limit we can declare all type parameters to be invariant and all
constraints will be satisfied.

As a simple example, consider:

    enum Option<A> { Some(A), None }
    enum OptionalFn<B> { Some(|B|), None }
    enum OptionalMap<C> { Some(|C| -> C), None }

Here, we will generate the constraints:

1. V(A) <= +
2. V(B) <= -
3. V(C) <= +
4. V(C) <= -

These indicate that (1) the variance of A must be at most covariant;
(2) the variance of B must be at most contravariant; and (3, 4) the
variance of C must be at most covariant *and* contravariant. All of these
results are based on a variance lattice defined as follows:

          *      Top (bivariant)
       -     +
          o      Bottom (invariant)

Based on this lattice, the solution V(A)=+, V(B)=-, V(C)=o is the
optimal solution. Note that there is always a naive solution which
just declares all variables to be invariant.
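
As a concrete check, here is a modern restatement of the
`OptionalMap` example (today's Rust spells the closure type as a
function pointer `fn(C) -> C`; the struct below is written just for
this illustration). Because `C` appears in both argument and return
position, its inferred variance is the greatest lower bound of `-`
and `+`, namely `o`:

```rust
// `C` is used covariantly (return type) *and* contravariantly
// (argument type), so it is inferred invariant.
struct OptionalMap<C> {
    f: Option<fn(C) -> C>,
}

// Neither direction of lifetime coercion compiles: with `C` invariant,
// `OptionalMap<&'a u8>` and `OptionalMap<&'static u8>` are unrelated.
// fn widen<'a>(x: OptionalMap<&'static u8>) -> OptionalMap<&'a u8> { x }
// fn narrow<'a>(x: OptionalMap<&'a u8>) -> OptionalMap<&'static u8> { x }
```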

You may be wondering why fixed-point iteration is required. The reason
is that the variance of a use site may itself be a function of the
variance of other type parameters. In full generality, our constraints
take the form:

    V(X) <= Term
    Term := + | - | * | o | V(X) | Term x Term

Here the notation V(X) indicates the variance of a type/region
parameter `X` with respect to its defining class. `Term x Term`
represents the "variance transform" as defined in the paper:

    If the variance of a type variable `X` in type expression `E` is `V2`
    and the definition-site variance of the [corresponding] type parameter
    of a class `C` is `V1`, then the variance of `X` in the type expression
    `C<E>` is `V3 = V1.xform(V2)`.
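
As a hedged illustration (the names below are invented for this
example, not taken from the compiler): in the following struct, `C`
appears in the argument position of a function pointer, which is
contravariant (`-`), nested inside `Vec`, whose parameter is
covariant (`+`). The transform gives `+ x - = -`, so `C` is inferred
contravariant.

```rust
// The only use of `C` is `Vec<fn(C)>`: a contravariant position
// transformed by a covariant one, yielding contravariance overall.
struct Callbacks<C> {
    callbacks: Vec<fn(C)>,
}

// Contravariance runs subtyping "backwards": `&'static u8 <: &'a u8`
// implies `Callbacks<&'a u8> <: Callbacks<&'static u8>`, so this
// coercion is accepted.
fn lengthen<'a>(x: Callbacks<&'a u8>) -> Callbacks<&'static u8> {
    x
}
```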

### Constraints

If I have a struct or enum with where clauses:

    struct Foo<T:Bar> { ... }

you might wonder whether the variance of `T` with respect to `Bar`
affects the variance of `T` with respect to `Foo`. I claim it does
not. The reason: assume that `T` is invariant w/r/t `Bar` but
covariant w/r/t `Foo`. Then suppose a `Foo<X>` is upcast to `Foo<Y>`,
where `X <: Y`, and that `X : Bar` holds while `Y : Bar` does not. In
that case, the upcast is illegal, but not because of a variance
failure: rather, the target type `Foo<Y>` is itself simply not
well-formed. Basically we get to assume well-formedness of all
types involved before considering variance.
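
Here is a minimal sketch of that argument (hypothetical names;
`NoBar` deliberately does not implement `Bar`). The compiler rejects
any mention of `Foo<NoBar>` on well-formedness grounds, before
variance ever enters the picture:

```rust
trait Bar {}
struct NoBar; // deliberately does *not* implement `Bar`

struct Foo<T: Bar> {
    value: T,
}

// Uncommenting this fails with "the trait bound `NoBar: Bar` is not
// satisfied": the *target type* is ill-formed, independent of the
// variance of `T` in `Foo`.
// fn upcast_target(f: Foo<NoBar>) {}
```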

#### Dependency graph management

Because variance works in two phases, if we are not careful, we wind
up with a muddled mess of a dep-graph. Basically, when gathering up
the constraints, things are fairly well-structured, but then we do a
fixed-point iteration and write the results back where they
belong. You can't give this fixed-point iteration a single task
because it reads from (and writes to) the variance of all types in the
crate. In principle, we *could* switch the "current task" in a very
fine-grained way while propagating constraints in the fixed-point
iteration and everything would be automatically tracked, but that
would add some overhead and isn't really necessary anyway.

Instead what we do is to add edges into the dependency graph as we
construct the constraint set: so, if computing the constraints for
node `X` requires loading the inference variables from node `Y`, then
we can add an edge `Y -> X`, since the variance we ultimately infer
for `Y` will affect the variance we ultimately infer for `X`.

At this point, we've basically mirrored the inference graph in the
dependency graph. This means we can just completely ignore the
fixed-point iteration, since it is just shuffling values along this
graph. In other words, if we added the fine-grained switching of tasks
I described earlier, all it would show is that we repeatedly read the
values described by the constraints, but those edges were already
added when building the constraints in the first place.

Here is how this is implemented (at least as of the time of this
writing). The associated `DepNode` for the variance map is (at least
presently) `Signature(DefId)`. This means that, in `constraints.rs`,
when we visit an item to load up its constraints, we set
`Signature(DefId)` as the current task (the "memoization" pattern
described in the `dep-graph` README). Then whenever we find an
embedded type or trait, we add a synthetic read of `Signature(DefId)`,
which covers the variances we will compute for all of its
parameters. This read is synthetic (i.e., we call
`variance_map.read()`) because, in fact, the final variance is not yet
computed -- the read *will* occur (repeatedly) during the fixed-point
iteration phase.

In fact, we don't really *need* this synthetic read. That's because we
do wind up looking up the `TypeScheme` or `TraitDef` for all
referenced types/traits, and those reads add an edge from
`Signature(DefId)` (that is, they share the same dep node as
variance). However, I've kept the synthetic reads in place anyway,
just for future-proofing (in case we change the dep-nodes in the
future), and because I think it makes the intention a bit clearer.

### Addendum: Variance on traits

As mentioned above, we used to permit variance on traits. This was
computed based on the appearance of trait type parameters in
method signatures and was used to represent the compatibility of
vtables in trait objects (and also the "virtual" vtables, or
dictionaries, used for trait bounds). One complication was that
variance for associated types is less obvious, since they can be
projected out and put to myriad uses, so it's not clear when it is
safe to allow `X<A>::Bar` to vary (or indeed just what that
means). Moreover (as covered below) all inputs on any trait with an
associated type had to be invariant, limiting the applicability.
Finally, the annotations (`MarkerTrait`, `PhantomFn`) needed to
ensure that all trait type parameters had a variance were confusing
and annoying for little benefit.

Just for historical reference, I am going to preserve some text
indicating how one could interpret variance and trait matching.

#### Variance and object types

Just as with structs and enums, we can decide the subtyping
relationship between two object types `&Trait<A>` and `&Trait<B>`
based on the relationship of `A` and `B`. Note that for object
types we ignore the `Self` type parameter -- it is unknown, and
the nature of dynamic dispatch ensures that we will always call a
function that is expecting the appropriate `Self` type. However, we
must be careful with the other type parameters, or else we could
end up calling a function that is expecting one type but provided
another.

To see what I mean, consider a trait like so:

    trait ConvertTo<A> {
        fn convertTo(&self) -> A;
    }

Intuitively, if we had one object `O=&ConvertTo<Object>` and another
`S=&ConvertTo<String>`, then `S <: O` because `String <: Object`
(presuming Java-like "string" and "object" types, my go-to examples
for subtyping). The actual algorithm would be to compare the
(explicit) type parameters pairwise respecting their variance: here,
the type parameter A is covariant (it appears only in a return
position), and hence we require that `String <: Object`.

You'll note though that we did not consider the binding for the
(implicit) `Self` type parameter: in fact, it is unknown, so that's
good. The reason we can ignore that parameter is precisely because we
don't need to know its value until a call occurs, and at that time
the dynamic nature of virtual dispatch means the code we run
will be correct for whatever value `Self` happens to be bound to for
the particular object whose method we called. `Self` is thus different
from `A`, because the caller requires that `A` be known in order to
know the return type of the method `convertTo()`. (As an aside, we
have rules preventing methods where `Self` appears outside of the
receiver position from being called via an object.)
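
A short sketch of that aside, in modern syntax (the `replace` method
is hypothetical, added to contrast with `convert_to`): a method whose
signature mentions `Self` outside the receiver position is excluded
from the trait object's vtable, which is exactly what lets us ignore
`Self` when comparing object types.

```rust
trait ConvertTo<A> {
    // `Self` appears only as the receiver: callable through
    // `&dyn ConvertTo<A>`.
    fn convert_to(&self) -> A;

    // `Self` appears in argument position: fenced off with
    // `Self: Sized`, so it cannot be called on a trait object.
    fn replace(&self, other: Self) -> A
    where
        Self: Sized;
}
```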

#### Trait variance and vtable resolution

But traits aren't only used with objects. They're also used when
deciding whether a given impl satisfies a given trait bound. To set the
scene here, imagine I had a function:

    fn convertAll<A,T:ConvertTo<A>>(v: &[T]) {
        ...
    }

Now imagine that I have an implementation of `ConvertTo` for `Object`:

    impl ConvertTo<i32> for Object { ... }

And I want to call `convertAll` on an array of strings. Suppose
further that for whatever reason I specifically supply the value of
`String` for the type parameter `T`:

    let mut vector = vec!["string", ...];
    convertAll::<i32, String>(vector);

Is this legal? To put it another way, can we apply the `impl` for
`Object` to the type `String`? The answer is yes, but to see why
we have to expand out what will happen:

- `convertAll` will create a pointer to one of the entries in the
  vector, which will have type `&String`
- It will then call the impl of `convertTo()` that is intended
  for use with objects. This has the type:

      fn(self: &Object) -> i32

  It is ok to provide a value for `self` of type `&String` because
  `&String <: &Object`.

OK, so intuitively we want this to be legal, so let's bring this back
to variance and see whether we are computing the correct result. We
must first figure out how to phrase the question "is an impl for
`Object,i32` usable where an impl for `String,i32` is expected?"

Maybe it's helpful to think of a dictionary-passing implementation of
type classes. In that case, `convertAll()` takes an implicit parameter
representing the impl. In short, we *have* an impl of type:

    V_O = ConvertTo<i32> for Object

and the function prototype expects an impl of type:

    V_S = ConvertTo<i32> for String

As with any argument, this is legal if the type of the value given
(`V_O`) is a subtype of the type expected (`V_S`). So is `V_O <: V_S`?
The answer will depend on the variance of the various parameters. In
this case, because the `Self` parameter is contravariant and `A` is
covariant, it means that:

    V_O <: V_S iff
        i32 <: i32
        String <: Object

These conditions are satisfied and so we are happy.

#### Variance and associated types

Traits with associated types -- or at minimum projection
expressions -- must be invariant with respect to all of their
inputs. To see why this makes sense, consider what subtyping for a
trait reference means:

    <T as Trait> <: <U as Trait>

means that if I know that `T as Trait`, I also know that `U as
Trait`. Moreover, if you think of it as dictionary passing style,
it means that a dictionary for `<T as Trait>` is safe to use where
a dictionary for `<U as Trait>` is expected.

The problem is that when you can project types out from `<T as
Trait>`, the relationship to types projected out of `<U as Trait>`
is completely unknown unless `T==U` (see #21726 for more
details). Making `Trait` invariant ensures that this is true.

Another related reason is that if we didn't make traits with
associated types invariant, then projection is no longer a
function with a single result. Consider:

```
trait Identity { type Out; fn foo(&self); }
impl<T> Identity for T { type Out = T; ... }
```

Now if I have `<&'static () as Identity>::Out`, this can be
validly derived as `&'a ()` for any `'a`:

    <&'a () as Identity> <: <&'static () as Identity>
        if &'static () <: &'a ()   -- Identity is contravariant in Self
            if 'static : 'a        -- Subtyping rules for relations

Making `Trait` invariant, on the other hand, means that `<&'static ()
as Identity>::Out` is always `&'static ()` (which might then be
upcast to `&'a ()` separately). This was helpful in solving #21750.


553 changes: 553 additions & 0 deletions src/librustc_typeck/variance/constraints.rs

Large diffs are not rendered by default.

37 changes: 37 additions & 0 deletions src/librustc_typeck/variance/mod.rs
@@ -0,0 +1,37 @@
// Copyright 2013 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.

//! Module for inferring the variance of type and lifetime
//! parameters. See README.md for details.

use arena;
use middle::ty;

/// Defines the `TermsContext`, which basically houses an arena where
/// we can allocate terms.
mod terms;

/// Code to gather up constraints.
mod constraints;

/// Code to solve constraints and write out the results.
mod solve;

/// Code for transforming variances.
mod xform;

pub fn infer_variance(tcx: &ty::ctxt) {
    let mut arena = arena::TypedArena::new();
    let terms_cx = terms::determine_parameters_to_be_inferred(tcx, &mut arena);
    let constraints_cx = constraints::add_constraints_from_crate(terms_cx);
    solve::solve_constraints(constraints_cx);
    tcx.variance_computed.set(true);
}

167 changes: 167 additions & 0 deletions src/librustc_typeck/variance/solve.rs
@@ -0,0 +1,167 @@
// Copyright 2013 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.

//! Constraint solving
//!
//! The final phase iterates over the constraints, refining the variance
//! for each inferred until a fixed point is reached. This will be the
//! optimal solution to the constraints. The final variance for each
//! inferred is then written into the `variance_map` in the tcx.

use middle::subst::VecPerParamSpace;
use middle::ty;
use std::rc::Rc;

use super::constraints::*;
use super::terms::*;
use super::terms::VarianceTerm::*;
use super::terms::ParamKind::*;
use super::xform::*;

struct SolveContext<'a, 'tcx: 'a> {
    terms_cx: TermsContext<'a, 'tcx>,
    constraints: Vec<Constraint<'a>>,

    // Maps from an InferredIndex to the inferred value for that variable.
    solutions: Vec<ty::Variance>
}

pub fn solve_constraints(constraints_cx: ConstraintContext) {
    let ConstraintContext { terms_cx, constraints, .. } = constraints_cx;

    let solutions =
        terms_cx.inferred_infos.iter()
                .map(|ii| ii.initial_variance)
                .collect();

    let mut solutions_cx = SolveContext {
        terms_cx: terms_cx,
        constraints: constraints,
        solutions: solutions
    };
    solutions_cx.solve();
    solutions_cx.write();
}

impl<'a, 'tcx> SolveContext<'a, 'tcx> {
    fn solve(&mut self) {
        // Propagate constraints until a fixed point is reached. Note
        // that the maximum number of iterations is 2C where C is the
        // number of constraints (each variable can change values at
        // most twice). Since the number of constraints is linear in
        // the size of the input, so is the inference process.
        let mut changed = true;
        while changed {
            changed = false;

            for constraint in &self.constraints {
                let Constraint { inferred, variance: term } = *constraint;
                let InferredIndex(inferred) = inferred;
                let variance = self.evaluate(term);
                let old_value = self.solutions[inferred];
                let new_value = glb(variance, old_value);
                if old_value != new_value {
                    debug!("Updating inferred {} (node {}) \
                            from {:?} to {:?} due to {:?}",
                           inferred,
                           self.terms_cx
                               .inferred_infos[inferred]
                               .param_id,
                           old_value,
                           new_value,
                           term);

                    self.solutions[inferred] = new_value;
                    changed = true;
                }
            }
        }
    }

    fn write(&self) {
        // Collect all the variances for a particular item and stick
        // them into the variance map. We rely on the fact that we
        // generate all the inferreds for a particular item
        // consecutively (that is, we collect solutions for an item
        // until we see a new item id, and we assume (1) the solutions
        // are in the same order as the type parameters were declared
        // and (2) all solutions for a given item appear before a new
        // item id).

        let tcx = self.terms_cx.tcx;

        // Ignore the writes here because the relevant edges were
        // already accounted for in `constraints.rs`. See the section
        // on dependency graph management in README.md for more
        // information.
        let _ignore = tcx.dep_graph.in_ignore();

        let solutions = &self.solutions;
        let inferred_infos = &self.terms_cx.inferred_infos;
        let mut index = 0;
        let num_inferred = self.terms_cx.num_inferred();
        while index < num_inferred {
            let item_id = inferred_infos[index].item_id;
            let mut types = VecPerParamSpace::empty();
            let mut regions = VecPerParamSpace::empty();

            while index < num_inferred && inferred_infos[index].item_id == item_id {
                let info = &inferred_infos[index];
                let variance = solutions[index];
                debug!("Index {} Info {} / {:?} / {:?} Variance {:?}",
                       index, info.index, info.kind, info.space, variance);
                match info.kind {
                    TypeParam => { types.push(info.space, variance); }
                    RegionParam => { regions.push(info.space, variance); }
                }

                index += 1;
            }

            let item_variances = ty::ItemVariances {
                types: types,
                regions: regions
            };
            debug!("item_id={} item_variances={:?}",
                   item_id,
                   item_variances);

            let item_def_id = tcx.map.local_def_id(item_id);

            // For unit testing: check for a special "rustc_variance"
            // attribute and report an error with various results if found.
            if tcx.has_attr(item_def_id, "rustc_variance") {
                span_err!(tcx.sess, tcx.map.span(item_id), E0208, "{:?}", item_variances);
            }

            let newly_added = tcx.item_variance_map.borrow_mut()
                                 .insert(item_def_id, Rc::new(item_variances))
                                 .is_none();
            assert!(newly_added);
        }
    }

    fn evaluate(&self, term: VarianceTermPtr<'a>) -> ty::Variance {
        match *term {
            ConstantTerm(v) => {
                v
            }

            TransformTerm(t1, t2) => {
                let v1 = self.evaluate(t1);
                let v2 = self.evaluate(t2);
                v1.xform(v2)
            }

            InferredTerm(InferredIndex(index)) => {
                self.solutions[index]
            }
        }
    }
}
287 changes: 287 additions & 0 deletions src/librustc_typeck/variance/terms.rs
@@ -0,0 +1,287 @@
// Copyright 2013 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.

// Representing terms
//
// Terms are structured as a straightforward tree. Rather than rely on
// GC, we allocate terms out of a bounded arena (the lifetime of this
// arena is the lifetime 'a that is threaded around).
//
// We assign a unique index to each type/region parameter whose variance
// is to be inferred. We refer to such variables as "inferreds". An
// `InferredIndex` is a newtype'd int representing the index of such
// a variable.

use arena::TypedArena;
use dep_graph::DepTrackingMapConfig;
use middle::subst::{ParamSpace, FnSpace, TypeSpace, SelfSpace, VecPerParamSpace};
use middle::ty;
use middle::ty::maps::ItemVariances;
use std::fmt;
use std::rc::Rc;
use syntax::ast;
use rustc_front::hir;
use rustc_front::intravisit::Visitor;
use util::nodemap::NodeMap;

use self::VarianceTerm::*;
use self::ParamKind::*;

pub type VarianceTermPtr<'a> = &'a VarianceTerm<'a>;

#[derive(Copy, Clone, Debug)]
pub struct InferredIndex(pub usize);

#[derive(Copy, Clone)]
pub enum VarianceTerm<'a> {
    ConstantTerm(ty::Variance),
    TransformTerm(VarianceTermPtr<'a>, VarianceTermPtr<'a>),
    InferredTerm(InferredIndex),
}

impl<'a> fmt::Debug for VarianceTerm<'a> {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match *self {
            ConstantTerm(c1) => write!(f, "{:?}", c1),
            TransformTerm(v1, v2) => write!(f, "({:?} \u{00D7} {:?})", v1, v2),
            InferredTerm(id) => write!(f, "[{}]", { let InferredIndex(i) = id; i })
        }
    }
}

// The first pass over the crate simply builds up the set of inferreds.

pub struct TermsContext<'a, 'tcx: 'a> {
    pub tcx: &'a ty::ctxt<'tcx>,
    pub arena: &'a TypedArena<VarianceTerm<'a>>,

    pub empty_variances: Rc<ty::ItemVariances>,

    // For marker types, UnsafeCell, and other lang items where
    // variance is hardcoded, records the item-id and the hardcoded
    // variance.
    pub lang_items: Vec<(ast::NodeId, Vec<ty::Variance>)>,

    // Maps from the node id of a type/generic parameter to the
    // corresponding inferred index.
    pub inferred_map: NodeMap<InferredIndex>,

    // Maps from an InferredIndex to the info for that variable.
    pub inferred_infos: Vec<InferredInfo<'a>>,
}

#[derive(Copy, Clone, Debug, PartialEq)]
pub enum ParamKind {
    TypeParam,
    RegionParam,
}

pub struct InferredInfo<'a> {
    pub item_id: ast::NodeId,
    pub kind: ParamKind,
    pub space: ParamSpace,
    pub index: usize,
    pub param_id: ast::NodeId,
    pub term: VarianceTermPtr<'a>,

    // Initial value to use for this parameter when inferring
    // variance. For most parameters, this is Bivariant. But for lang
    // items and input type parameters on traits, it is different.
    pub initial_variance: ty::Variance,
}

pub fn determine_parameters_to_be_inferred<'a, 'tcx>(
    tcx: &'a ty::ctxt<'tcx>,
    arena: &'a mut TypedArena<VarianceTerm<'a>>)
    -> TermsContext<'a, 'tcx>
{
    let mut terms_cx = TermsContext {
        tcx: tcx,
        arena: arena,
        inferred_map: NodeMap(),
        inferred_infos: Vec::new(),

        lang_items: lang_items(tcx),

        // cache and share the variance struct used for items with
        // no type/region parameters
        empty_variances: Rc::new(ty::ItemVariances {
            types: VecPerParamSpace::empty(),
            regions: VecPerParamSpace::empty()
        })
    };

    // See README.md for a discussion on dep-graph management.
    tcx.visit_all_items_in_krate(|def_id| ItemVariances::to_dep_node(&def_id),
                                 &mut terms_cx);

    terms_cx
}

fn lang_items(tcx: &ty::ctxt) -> Vec<(ast::NodeId, Vec<ty::Variance>)> {
    let all = vec![
        (tcx.lang_items.phantom_data(), vec![ty::Covariant]),
        (tcx.lang_items.unsafe_cell_type(), vec![ty::Invariant]),

        // Deprecated:
        (tcx.lang_items.covariant_type(), vec![ty::Covariant]),
        (tcx.lang_items.contravariant_type(), vec![ty::Contravariant]),
        (tcx.lang_items.invariant_type(), vec![ty::Invariant]),
        (tcx.lang_items.covariant_lifetime(), vec![ty::Covariant]),
        (tcx.lang_items.contravariant_lifetime(), vec![ty::Contravariant]),
        (tcx.lang_items.invariant_lifetime(), vec![ty::Invariant]),
    ];

    all.into_iter() // iterating over (Option<DefId>, Variance)
       .filter(|&(ref d, _)| d.is_some())
       .map(|(d, v)| (d.unwrap(), v)) // (DefId, Variance)
       .filter_map(|(d, v)| tcx.map.as_local_node_id(d).map(|n| (n, v))) // (NodeId, Variance)
       .collect()
}

impl<'a, 'tcx> TermsContext<'a, 'tcx> {
    fn add_inferreds_for_item(&mut self,
                              item_id: ast::NodeId,
                              has_self: bool,
                              generics: &hir::Generics)
    {
        /*!
         * Add "inferreds" for the generic parameters declared on this
         * item. This has a lot of annoying parameters because we are
         * trying to drive this from the AST, rather than the
         * ty::Generics, so that we can get span info -- but this
         * means we must accommodate syntactic distinctions.
         */

        // NB: In the code below for writing the results back into the
        // tcx, we rely on the fact that all inferreds for a particular
        // item are assigned continuous indices.

        let inferreds_on_entry = self.num_inferred();

        if has_self {
            self.add_inferred(item_id, TypeParam, SelfSpace, 0, item_id);
        }

        for (i, p) in generics.lifetimes.iter().enumerate() {
            let id = p.lifetime.id;
            self.add_inferred(item_id, RegionParam, TypeSpace, i, id);
        }

        for (i, p) in generics.ty_params.iter().enumerate() {
            self.add_inferred(item_id, TypeParam, TypeSpace, i, p.id);
        }

        // If this item has no type or lifetime parameters,
        // then there are no variances to infer, so just
        // insert an empty entry into the variance map.
        // Arguably we could just leave the map empty in this
        // case but it seems cleaner to be able to distinguish
        // "invalid item id" from "item id with no
        // parameters".
        if self.num_inferred() == inferreds_on_entry {
            let item_def_id = self.tcx.map.local_def_id(item_id);
            let newly_added =
                self.tcx.item_variance_map.borrow_mut().insert(
                    item_def_id,
                    self.empty_variances.clone()).is_none();
            assert!(newly_added);
        }
    }

    fn add_inferred(&mut self,
                    item_id: ast::NodeId,
                    kind: ParamKind,
                    space: ParamSpace,
                    index: usize,
                    param_id: ast::NodeId) {
        let inf_index = InferredIndex(self.inferred_infos.len());
        let term = self.arena.alloc(InferredTerm(inf_index));
        let initial_variance = self.pick_initial_variance(item_id, space, index);
        self.inferred_infos.push(InferredInfo { item_id: item_id,
                                                kind: kind,
                                                space: space,
                                                index: index,
                                                param_id: param_id,
                                                term: term,
                                                initial_variance: initial_variance });
        let newly_added = self.inferred_map.insert(param_id, inf_index).is_none();
        assert!(newly_added);

        debug!("add_inferred(item_path={}, \
                item_id={}, \
                kind={:?}, \
                space={:?}, \
                index={}, \
                param_id={}, \
                inf_index={:?}, \
                initial_variance={:?})",
               self.tcx.item_path_str(self.tcx.map.local_def_id(item_id)),
               item_id, kind, space, index, param_id, inf_index,
               initial_variance);
    }

    fn pick_initial_variance(&self,
                             item_id: ast::NodeId,
                             space: ParamSpace,
                             index: usize)
                             -> ty::Variance
    {
        match space {
            SelfSpace | FnSpace => {
                ty::Bivariant
            }

            TypeSpace => {
                match self.lang_items.iter().find(|&&(n, _)| n == item_id) {
                    Some(&(_, ref variances)) => variances[index],
                    None => ty::Bivariant
                }
            }
        }
    }

    pub fn num_inferred(&self) -> usize {
        self.inferred_infos.len()
    }
}

impl<'a, 'tcx, 'v> Visitor<'v> for TermsContext<'a, 'tcx> {
    fn visit_item(&mut self, item: &hir::Item) {
        debug!("add_inferreds for item {}", self.tcx.map.node_to_string(item.id));

        match item.node {
            hir::ItemEnum(_, ref generics) |
            hir::ItemStruct(_, ref generics) => {
                self.add_inferreds_for_item(item.id, false, generics);
            }
            hir::ItemTrait(_, ref generics, _, _) => {
                // Note: all inputs for traits are ultimately
                // constrained to be invariant. See `visit_item` in
                // the impl for `ConstraintContext` in `constraints.rs`.
                self.add_inferreds_for_item(item.id, true, generics);
            }

            hir::ItemExternCrate(_) |
            hir::ItemUse(_) |
            hir::ItemDefaultImpl(..) |
            hir::ItemImpl(..) |
            hir::ItemStatic(..) |
            hir::ItemConst(..) |
            hir::ItemFn(..) |
            hir::ItemMod(..) |
            hir::ItemForeignMod(..) |
            hir::ItemTy(..) => {
            }
        }
    }
}

61 changes: 61 additions & 0 deletions src/librustc_typeck/variance/xform.rs
@@ -0,0 +1,61 @@
// Copyright 2013 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.

use middle::ty;

pub trait Xform {
    fn xform(self, v: Self) -> Self;
}

impl Xform for ty::Variance {
    fn xform(self, v: ty::Variance) -> ty::Variance {
        // "Variance transformation", Figure 1 of The Paper
        match (self, v) {
            // Figure 1, column 1.
            (ty::Covariant, ty::Covariant) => ty::Covariant,
            (ty::Covariant, ty::Contravariant) => ty::Contravariant,
            (ty::Covariant, ty::Invariant) => ty::Invariant,
            (ty::Covariant, ty::Bivariant) => ty::Bivariant,

            // Figure 1, column 2.
            (ty::Contravariant, ty::Covariant) => ty::Contravariant,
            (ty::Contravariant, ty::Contravariant) => ty::Covariant,
            (ty::Contravariant, ty::Invariant) => ty::Invariant,
            (ty::Contravariant, ty::Bivariant) => ty::Bivariant,

            // Figure 1, column 3.
            (ty::Invariant, _) => ty::Invariant,

            // Figure 1, column 4.
            (ty::Bivariant, _) => ty::Bivariant,
        }
    }
}

pub fn glb(v1: ty::Variance, v2: ty::Variance) -> ty::Variance {
    // Greatest lower bound of the variance lattice as
    // defined in The Paper:
    //
    //       *
    //    -     +
    //       o
    match (v1, v2) {
        (ty::Invariant, _) | (_, ty::Invariant) => ty::Invariant,

        (ty::Covariant, ty::Contravariant) => ty::Invariant,
        (ty::Contravariant, ty::Covariant) => ty::Invariant,

        (ty::Covariant, ty::Covariant) => ty::Covariant,

        (ty::Contravariant, ty::Contravariant) => ty::Contravariant,

        (x, ty::Bivariant) | (ty::Bivariant, x) => x,
    }
}
14 changes: 4 additions & 10 deletions src/test/compile-fail/dep-graph-struct-signature.rs
@@ -74,23 +74,17 @@ mod signatures {
     fn indirect(x: WillChanges) { }
 }
 
-// these are invalid dependencies, though sometimes we create edges
-// anyway.
 mod invalid_signatures {
     use WontChange;
 
-    // FIXME due to the variance pass having overly conservative edges,
-    // we incorrectly think changes are needed here
-    #[rustc_then_this_would_need(ItemSignature)] //~ ERROR OK
-    #[rustc_then_this_would_need(CollectItem)] //~ ERROR OK
+    #[rustc_then_this_would_need(ItemSignature)] //~ ERROR no path
+    #[rustc_then_this_would_need(CollectItem)] //~ ERROR no path
     trait A {
         fn do_something_else_twice(x: WontChange);
     }
 
-    // FIXME due to the variance pass having overly conservative edges,
-    // we incorrectly think changes are needed here
-    #[rustc_then_this_would_need(ItemSignature)] //~ ERROR OK
-    #[rustc_then_this_would_need(CollectItem)] //~ ERROR OK
+    #[rustc_then_this_would_need(ItemSignature)] //~ ERROR no path
+    #[rustc_then_this_would_need(CollectItem)] //~ ERROR no path
     fn b(x: WontChange) { }
 
     #[rustc_then_this_would_need(ItemSignature)] //~ ERROR no path from `WillChange`