diff --git a/src/librustc_mir/hair/pattern/_match.rs b/src/librustc_mir/hair/pattern/_match.rs
index 3ea5805287724..79e040925085b 100644
--- a/src/librustc_mir/hair/pattern/_match.rs
+++ b/src/librustc_mir/hair/pattern/_match.rs
@@ -1,178 +1,298 @@
+/// Note: most tests relevant to this file can be found (at the time of writing)
+/// in src/tests/ui/pattern/usefulness. Also look out for rfc2008 (feature
+/// non_exhaustive) tests.
+///
+/// # Introduction
+///
 /// This file includes the logic for exhaustiveness and usefulness checking for
 /// pattern-matching. Specifically, given a list of patterns for a type, we can
 /// tell whether:
 /// (a) the patterns cover every possible constructor for the type [exhaustiveness]
 /// (b) each pattern is necessary [usefulness]
 ///
-/// The algorithm implemented here is a modified version of the one described in:
+/// The algorithm implemented here is based on the one described in:
 /// http://moscova.inria.fr/~maranget/papers/warn/index.html
-/// However, to save future implementors from reading the original paper, we
-/// summarise the algorithm here to hopefully save time and be a little clearer
-/// (without being so rigorous).
+/// However, various modifications have been made to it, so we keep it only as a reference
+/// and will describe the extended algorithm here (without being so rigorous).
 ///
 /// The core of the algorithm revolves about a "usefulness" check. In particular, we
-/// are trying to compute a predicate `U(P, p_{m + 1})` where `P` is a list of patterns
-/// of length `m` for a compound (product) type with `n` components (we refer to this as
-/// a matrix). `U(P, p_{m + 1})` represents whether, given an existing list of patterns
-/// `p_1 ..= p_m`, adding a new pattern will be "useful" (that is, cover previously-
+/// are trying to compute a predicate `U(P, q)` where `P` is a list of patterns.
+/// `U(P, q)` represents whether, given an existing list of patterns
+/// `P_1 ..= P_m`, adding a new pattern `q` will be "useful" (that is, cover previously-
 /// uncovered values of the type).
 ///
 /// If we have this predicate, then we can easily compute both exhaustiveness of an
 /// entire set of patterns and the individual usefulness of each one.
 /// (a) the set of patterns is exhaustive iff `U(P, _)` is false (i.e., adding a wildcard
 /// match doesn't increase the number of values we're matching)
-/// (b) a pattern `p_i` is not useful if `U(P[0..=(i-1), p_i)` is false (i.e., adding a
-/// pattern to those that have come before it doesn't increase the number of values
-/// we're matching).
+/// (b) a pattern `P_i` is not useful (i.e. unreachable) if `U(P[0..=(i-1)], P_i)` is
+/// false (i.e., adding a pattern to those that have come before it doesn't match any value
+/// that wasn't matched previously).
+///
+///
+/// # Pattern-stacks and matrices
+///
+/// The basic data structure that we will make use of in the algorithm is a list of patterns that
+/// the paper calls "pattern-vector" and that we call "pattern-stack". The idea is that we
+/// start with a single pattern of interest,
+/// and repeatedly unpack the top constructor to reveal its arguments. We keep the yet-untreated
+/// arguments in the tail of the stack.
+///
+/// For example, say we start with the pattern `Foo(Bar(1, 2), Some(_), false)`.
The +/// pattern-stack might then evolve as follows: +/// [Foo(Bar(1, 2), Some(_), false)] // Initially we have a single pattern in the stack +/// [Bar(1, 2), Some(_), false] // After unpacking the `Foo` constructor +/// [1, 2, Some(_), false] // After unpacking the `Bar` constructor +/// [2, Some(_), false] // After unpacking the `1` constructor +/// // etc. +/// +/// We call the operation of popping the constructor on top of the stack "specialization", and we +/// write it `S(c, p)`, where `p` is a pattern-stack and `c` a specific constructor (like `Some` +/// or `None`). This operation returns zero or more altered pattern-stacks, as follows. +/// We look at the pattern `p_1` on top of the stack, and we have four cases: +/// 1. `p_1 = c(r_1, .., r_a)`, i.e. the top of the stack has constructor `c`. We push +/// onto the stack the arguments of this constructor, and return the result: +/// r_1, .., r_a, p_2, .., p_n +/// 2. `p_1 = c'(r_1, .., r_a')` where `c ≠ c'`. We discard the current stack and return +/// nothing. +/// 3. `p_1 = _`. We push onto the stack as many wildcards as the constructor `c` +/// has arguments (its arity), and return the resulting stack: +/// _, .., _, p_2, .., p_n +/// 4. `p_1 = r_1 | r_2`. We expand the OR-pattern and then recurse on each resulting stack: +/// S(c, (r_1, p_2, .., p_n)) +/// S(c, (r_2, p_2, .., p_n)) +/// +/// Note that when the required constructor does not match the constructor on top of the stack, we +/// return nothing. Thus specialization filters pattern-stacks by the constructor on top of them. +/// +/// We call a list of pattern-stacks a "matrix", because in the run of the algorithm they will +/// keep a rectangular shape. `S` operation extends straightforwardly to matrices by +/// working row-by-row using flat_map. +/// +/// +/// # Abstract algorithm +/// +/// The algorithm itself is a function `U`, that takes as arguments a matrix `M` and a new pattern +/// `p`, both with the same number `n` of columns. +/// The algorithm is inductive (on the number of columns: i.e., components of pattern-stacks). +/// The algorithm is realised in the `is_useful` function. +/// +/// Base case. (`n = 0`, i.e., an empty tuple pattern) +/// - If `M` already contains an empty pattern (i.e., if the number of patterns `m > 0`), +/// then `U(M, p)` is false. +/// - Otherwise, `M` must be empty, so `U(M, p)` is true. +/// +/// Inductive step. (`n > 0`) +/// We look at `p_1`, the head of the pattern-stack `p`. +/// +/// We first generate the list of constructors that are covered by a pattern `pat`. We name +/// this operation `pat_constructors`. +/// - If `pat == c(r_1, .., r_a)`, i.e. we have a constructor pattern. Then we just +/// return `c`: +/// `pat_constructors(pat) = [c]` +/// +/// - If `pat == _`, then we return the list of all possible constructors for the +/// relevant type: +/// `pat_constructors(pat) = all_constructors(pat.ty)` +/// +/// - If `pat == r_1 | r_2`, then we return the constructors for either branch of the +/// OR-pattern: +/// `pat_constructors(pat) = pat_constructors(r_1) + pat_constructors(r_2)` +/// +/// Then for each constructor `c` in `pat_constructors(p_1)`, we want to check whether a value +/// that starts with this constructor may show that `p` is useful, i.e. may match `p` but not +/// be matched by the matrix above. 
+/// For that, we only care about those rows of `M` whose first component covers the +/// constructor `c`; and for those rows that do, we want to unpack the arguments to `c` to check +/// further that `p` matches additional values. +/// This is where specialization comes in: this check amounts to computing `U(S(c, M), S(c, +/// p))`. More details can be found in the paper. /// -/// For example, say we have the following: +/// Thus we get: `U(M, p) := ∃(c ϵ pat_constructors(p_1)) U(S(c, M), S(c, p))` +/// +/// Note that for c ϵ pat_constructors(p_1), `S(c, P)` always returns exactly one element, so +/// the formula above makes sense. +/// +/// This algorithm however has a lot of practical issues. Most importantly, it may not terminate +/// for some types with infinitely many inhabitants, because when it encounters a wildcard it will +/// try all the values of the type. And it would be stupidly slow anyways for types with a lot of +/// constructors, like `u64` of `&[bool]`. We therefore present a modified version after the +/// example. +/// +/// +/// # Example run of the algorithm +/// +/// Assume we have the following match. We want to know whether it is exhaustive, i.e. whether +/// an additional `_` pattern would be useful (would be reachable). /// ``` -/// // x: (Option, Result<()>) /// match x { -/// (Some(true), _) => {} -/// (None, Err(())) => {} -/// (None, Err(_)) => {} +/// Some(true) => {} +/// None => {} /// } /// ``` -/// Here, the matrix `P` is 3 x 2 (rows x columns). -/// [ -/// [Some(true), _], -/// [None, Err(())], -/// [None, Err(_)], -/// ] -/// We can tell it's not exhaustive, because `U(P, _)` is true (we're not covering -/// `[Some(false), _]`, for instance). In addition, row 3 is not useful, because -/// all the values it covers are already covered by row 2. /// -/// To compute `U`, we must have two other concepts. -/// 1. `S(c, P)` is a "specialized matrix", where `c` is a constructor (like `Some` or -/// `None`). You can think of it as filtering `P` to just the rows whose *first* pattern -/// can cover `c` (and expanding OR-patterns into distinct patterns), and then expanding -/// the constructor into all of its components. -/// The specialization of a row vector is computed by `specialize`. +/// We start with the following `M` and `p`: +/// M = [ [Some(true)], +/// [None] ] +/// p = [_] +/// `pat_constructors(p)` returns `[None, Some]` /// -/// It is computed as follows. For each row `p_i` of P, we have four cases: -/// 1.1. `p_(i,1) = c(r_1, .., r_a)`. Then `S(c, P)` has a corresponding row: -/// r_1, .., r_a, p_(i,2), .., p_(i,n) -/// 1.2. `p_(i,1) = c'(r_1, .., r_a')` where `c ≠ c'`. Then `S(c, P)` has no -/// corresponding row. -/// 1.3. `p_(i,1) = _`. Then `S(c, P)` has a corresponding row: -/// _, .., _, p_(i,2), .., p_(i,n) -/// 1.4. `p_(i,1) = r_1 | r_2`. Then `S(c, P)` has corresponding rows inlined from: -/// S(c, (r_1, p_(i,2), .., p_(i,n))) -/// S(c, (r_2, p_(i,2), .., p_(i,n))) +/// We specialize on the `None` constructor first: +/// S(None, M) = [ [] ] +/// S(None, p) = [] +/// We hit the base case n = 0: since bool is inhabited, `U(S(None, M), S(None, p)) = false`. /// -/// 2. `D(P)` is a "default matrix". This is used when we know there are missing -/// constructor cases, but there might be existing wildcard patterns, so to check the -/// usefulness of the matrix, we have to check all its *other* components. -/// The default matrix is computed inline in `is_useful`. 
+/// We specialize on the `Some` constructor second: +/// S(Some, M) = [ [true] ] +/// S(Some, p) = [_] +/// Let M' := S(Some, M) and p' := S(Some, p). /// -/// It is computed as follows. For each row `p_i` of P, we have three cases: -/// 1.1. `p_(i,1) = c(r_1, .., r_a)`. Then `D(P)` has no corresponding row. -/// 1.2. `p_(i,1) = _`. Then `D(P)` has a corresponding row: -/// p_(i,2), .., p_(i,n) -/// 1.3. `p_(i,1) = r_1 | r_2`. Then `D(P)` has corresponding rows inlined from: -/// D((r_1, p_(i,2), .., p_(i,n))) -/// D((r_2, p_(i,2), .., p_(i,n))) +/// `pat_constructors(p')` returns `[true, false]` +/// S(true, M') = [ [] ] +/// S(true, p') = [] +/// So `U(S(true, M'), S(true, p')) = false` /// -/// Note that the OR-patterns are not always used directly in Rust, but are used to derive -/// the exhaustive integer matching rules, so they're written here for posterity. +/// S(false, M') = [] +/// S(false, p') = [] +/// So `U(S(false, M'), S(false, p')) = true` /// -/// The algorithm for computing `U` -/// ------------------------------- -/// The algorithm is inductive (on the number of columns: i.e., components of tuple patterns). -/// That means we're going to check the components from left-to-right, so the algorithm -/// operates principally on the first component of the matrix and new pattern `p_{m + 1}`. -/// This algorithm is realised in the `is_useful` function. +/// Therefore `U(M, p) = true`, indeed by following the steps taken we can recover that +/// the pattern `Some(false)` was not covered by the initial match. /// -/// Base case. (`n = 0`, i.e., an empty tuple pattern) -/// - If `P` already contains an empty pattern (i.e., if the number of patterns `m > 0`), -/// then `U(P, p_{m + 1})` is false. -/// - Otherwise, `P` must be empty, so `U(P, p_{m + 1})` is true. /// -/// Inductive step. (`n > 0`, i.e., whether there's at least one column -/// [which may then be expanded into further columns later]) -/// We're going to match on the new pattern, `p_{m + 1}`. -/// - If `p_{m + 1} == c(r_1, .., r_a)`, then we have a constructor pattern. -/// Thus, the usefulness of `p_{m + 1}` can be reduced to whether it is useful when -/// we ignore all the patterns in `P` that involve other constructors. This is where -/// `S(c, P)` comes in: -/// `U(P, p_{m + 1}) := U(S(c, P), S(c, p_{m + 1}))` -/// This special case is handled in `is_useful_specialized`. -/// - If `p_{m + 1} == _`, then we have two more cases: -/// + All the constructors of the first component of the type exist within -/// all the rows (after having expanded OR-patterns). In this case: -/// `U(P, p_{m + 1}) := ∨(k ϵ constructors) U(S(k, P), S(k, p_{m + 1}))` -/// I.e., the pattern `p_{m + 1}` is only useful when all the constructors are -/// present *if* its later components are useful for the respective constructors -/// covered by `p_{m + 1}` (usually a single constructor, but all in the case of `_`). -/// + Some constructors are not present in the existing rows (after having expanded -/// OR-patterns). However, there might be wildcard patterns (`_`) present. Thus, we -/// are only really concerned with the other patterns leading with wildcards. 
This is -/// where `D` comes in: -/// `U(P, p_{m + 1}) := U(D(P), p_({m + 1},2), .., p_({m + 1},n))` -/// - If `p_{m + 1} == r_1 | r_2`, then the usefulness depends on each separately: -/// `U(P, p_{m + 1}) := U(P, (r_1, p_({m + 1},2), .., p_({m + 1},n))) -/// || U(P, (r_2, p_({m + 1},2), .., p_({m + 1},n)))` +/// # Concrete algorithm /// -/// Modifications to the algorithm -/// ------------------------------ -/// The algorithm in the paper doesn't cover some of the special cases that arise in Rust, for -/// example uninhabited types and variable-length slice patterns. These are drawn attention to -/// throughout the code below. I'll make a quick note here about how exhaustive integer matching -/// is accounted for, though. +/// To make the algorithm tractable, we introduce the notion of meta-constructors. A +/// meta-constructor stands for a particular group of constructors. The typical example +/// is the wildcard `_`, which stands for all the constructors of a given type. /// -/// Exhaustive integer matching -/// --------------------------- -/// An integer type can be thought of as a (huge) sum type: 1 | 2 | 3 | ... -/// So to support exhaustive integer matching, we can make use of the logic in the paper for -/// OR-patterns. However, we obviously can't just treat ranges x..=y as individual sums, because -/// they are likely gigantic. So we instead treat ranges as constructors of the integers. This means -/// that we have a constructor *of* constructors (the integers themselves). We then need to work -/// through all the inductive step rules above, deriving how the ranges would be treated as -/// OR-patterns, and making sure that they're treated in the same way even when they're ranges. -/// There are really only four special cases here: -/// - When we match on a constructor that's actually a range, we have to treat it as if we would -/// an OR-pattern. -/// + It turns out that we can simply extend the case for single-value patterns in -/// `specialize` to either be *equal* to a value constructor, or *contained within* a range -/// constructor. -/// + When the pattern itself is a range, you just want to tell whether any of the values in -/// the pattern range coincide with values in the constructor range, which is precisely -/// intersection. -/// Since when encountering a range pattern for a value constructor, we also use inclusion, it -/// means that whenever the constructor is a value/range and the pattern is also a value/range, -/// we can simply use intersection to test usefulness. -/// - When we're testing for usefulness of a pattern and the pattern's first component is a -/// wildcard. -/// + If all the constructors appear in the matrix, we have a slight complication. By default, -/// the behaviour (i.e., a disjunction over specialised matrices for each constructor) is -/// invalid, because we want a disjunction over every *integer* in each range, not just a -/// disjunction over every range. This is a bit more tricky to deal with: essentially we need -/// to form equivalence classes of subranges of the constructor range for which the behaviour -/// of the matrix `P` and new pattern `p_{m + 1}` are the same. This is described in more -/// detail in `split_grouped_constructors`. -/// + If some constructors are missing from the matrix, it turns out we don't need to do -/// anything special (because we know none of the integers are actually wildcards: i.e., we -/// can't span wildcards using ranges). 
-
+/// In practice, the meta-constructors we make use of in this file are the following:
+/// - any normal constructor is also a meta-constructor with exactly one member;
+/// - the wildcard `_`, that captures all constructors of a given type;
+/// - the constant range `x..y` that captures a range of values for types that support
+///   it, like integers;
+/// - the variable-length slice `[x, y, .., z]`, that captures all slice constructors
+///   from a given length onwards;
+/// - the "missing constructors" meta-constructor, that captures a provided arbitrary group
+///   of constructors.
+///
+/// We first redefine `pat_constructors` to potentially return a meta-constructor when relevant
+/// for a pattern.
+///
+/// We then add a step to the algorithm: a function `split_meta_constructor(mc, M)` that returns
+/// a list of meta-constructors, with the following properties:
+/// - the set of base constructors covered by the output must be the same as covered by `mc`;
+/// - for each meta-constructor `k` in the output, all the `c ϵ k` behave the same relative
+///   to `M`. More precisely, we want that for any two `c1` and `c2` in `k`,
+///   `U(S(c1, M), S(c1, p))` iff `U(S(c2, M), S(c2, p))`;
+/// - if the first column of `M` is only wildcards, then the function returns at most
+///   `[mc]` on its own;
+/// - if the relevant type is uninhabited, the function returns nothing.
+/// Any function that has those properties ensures correctness of the algorithm. We will of course
+/// try to pick a function that also ensures good performance.
+/// The idea is that we still need to try different constructors, but we try to keep them grouped
+/// together when possible to avoid doing redundant work.
+///
+/// Here is roughly how splitting works for us:
+/// - for wildcards, there are two cases:
+///     - if all the possible constructors of the relevant type exist in the first column
+///       of `M`, then we return the list of all those constructors, like we did before;
+///     - if, however, some constructors are missing, then it turns out that considering
+///       those missing constructors is enough. We return a "missing constructors" meta-
+///       constructor that carries the missing constructors in question.
+///       (Note the similarity with the algorithm from the paper. It is not a coincidence.)
+/// - for ranges, we split the range into a disjoint set of subranges; see the code for details
+///   and the sketch at the end of this section;
+/// - for slices, we split the slice into a number of fixed-length slices and one longer
+///   variable-length slice, again see code;
+///
+/// Thus we get the new inductive step (i.e. when `n > 0`):
+/// `U(M, p) :=
+///     ∃(mc ϵ pat_constructors(p_1))
+///     ∃(mc' ϵ split_meta_constructor(mc, M))
+///     U(S(c, M), S(c, p)) for some c ϵ mc'`
+/// Note: in the case of an uninhabited type, there won't be any `mc'`, so this just returns false.
+///
+/// Note that the termination of the algorithm now depends on the behaviour of the splitting
+/// phase. However, from the third property of the splitting function,
+/// we can see that the depth of splitting of the algorithm is bounded by some
+/// function of the depths of the patterns fed to it initially. So we're confident that
+/// it terminates.
+///
+/// This algorithm is equivalent to the one presented in the paper if we only consider
+/// wildcards. Thus it mostly extends the original algorithm to ranges and variable-length
+/// slices, while removing the special-casing of the wildcard pattern. We additionally
+/// support uninhabited types.
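+///
+/// To make the range-splitting step more concrete, here is a minimal, self-contained sketch of
+/// the border-based splitting described above, written over plain `u128` ranges rather than the
+/// `IntRange` type used further down in this file. The function name `split_range` and the
+/// standalone setting are illustrative only; the real implementation lives in
+/// `split_meta_constructor` below.
+/// ```rust
+/// fn split_range(
+///     ctor: std::ops::RangeInclusive<u128>,
+///     rows: &[std::ops::RangeInclusive<u128>],
+/// ) -> Vec<std::ops::RangeInclusive<u128>> {
+///     let (lo, hi) = ctor.into_inner();
+///     // A "border" is the first value of a row range, or the value just after its end.
+///     // We only keep borders that fall strictly inside the constructor range.
+///     let mut borders: Vec<u128> = rows
+///         .iter()
+///         .flat_map(|r| std::iter::once(*r.start()).chain(r.end().checked_add(1)))
+///         .filter(|&b| lo < b && b <= hi)
+///         .collect();
+///     borders.sort_unstable();
+///     borders.dedup();
+///     // Cut `ctor` at every border: within each resulting subrange, the set of rows that
+///     // intersect it is constant, so all its values behave the same relative to the matrix.
+///     let mut subranges = Vec::new();
+///     let mut start = lo;
+///     for b in borders {
+///         subranges.push(start..=b - 1);
+///         start = b;
+///     }
+///     subranges.push(start..=hi);
+///     subranges
+/// }
+/// ```
+/// For instance, splitting `0..=10` against the rows `[3..=5, 6..=7]` yields
+/// `[0..=2, 3..=5, 6..=7, 8..=10]`: each subrange is either fully inside or fully outside each
+/// row, which is exactly the property the splitting function needs.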
+///
+///
+/// # Handling of missing constructors (MISSING-CTOR)
+///
+/// This algorithm can be seen to essentially explore the full decision tree of patterns, except in
+/// the case where it "takes a shortcut" to find a "missing" constructor - that is, a constructor
+/// that is not covered by any of the non-wildcard patterns.
+///
+/// For example, in the following case:
+/// ```rust
+/// match x {
+///     (_, false) => {}
+///     (None, false) => {}
+/// }
+/// ```
+///
+/// The algorithm proceeds as follows:
+/// ```
+/// M = [ [(_, false)],
+///       [(None, false)]]
+/// p = [_]
+/// -- expand tuple (ctor = Single)
+/// M = [ [_, false],
+///       [None, false]]
+/// p = [_, _]
+/// -- `Some(_)` is a missing ctor, dropping all the non-wildcard arms
+/// M = [ [false]]
+/// p = [_]
+/// -- `true` is a possible witness
+/// M = []
+/// p = []
+/// return "[]"
+/// return "[true]"
+/// return "[Some(_), true]"
+/// return "[(Some(_), true)]"
+/// ```
+///
+/// Once it finds that `Some(_)` is a missing constructor, it does not need to look any further -
+/// any witness using a non-missing constructor can be transformed to a witness using a missing
+/// constructor - and therefore it does not try to look for witnesses involving the other
+/// constructors - in this case, the `(None, true)` witness (which can be "transformed" to
+/// `(Some(_), true)`).
+///
+/// In the code, missing constructors are represented by the `Wildcard` and `MissingConstructors`
+/// variants of `Constructor`, with the difference between them coming down to error reporting:
+/// `MissingConstructors` "remembers" the set of constructors it contains for error reporting (so
+/// we can show the `... not covered` error message), while `Wildcard` doesn't.
+///
+/// Therefore, `Wildcard` is used in cases where the exact constructor doesn't matter - either
+/// where the head column of the matrix contains only wildcards (and therefore *every* constructor
+/// will work), or when the enum is `#[non_exhaustive]` and therefore, from a user's POV, there can
+/// always be assumed to be a "fresh" constructor that will be useful for the witness.
+/// `MissingConstructors` is used in the other cases.
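+///
+/// As a final illustration, here is a tiny, self-contained model of the abstract `U(M, p)`
+/// recursion from the introduction, restricted to patterns over a single three-valued enum with
+/// nullary constructors (so specialization never pushes new columns, and there is no need for
+/// meta-constructors). All the names (`V`, `P`, `is_useful`, ...) are illustrative only; the real
+/// implementation below is generic over types and constructors.
+/// ```rust
+/// #[derive(Clone, Copy, PartialEq)]
+/// enum V { A, B, C }       // a toy type with three constructors
+///
+/// #[derive(Clone, Copy)]
+/// enum P { Ctor(V), Wild } // a toy pattern: a specific constructor or `_`
+///
+/// // `pat_constructors`: the constructors covered by the head pattern.
+/// fn pat_constructors(p: P) -> Vec<V> {
+///     match p {
+///         P::Ctor(v) => vec![v],
+///         P::Wild => vec![V::A, V::B, V::C],
+///     }
+/// }
+///
+/// // `S(c, row)`: keep the row (minus its head) iff its head covers `c`.
+/// // All rows are assumed to have the same length as the pattern-stack `p`.
+/// fn specialize(c: V, row: &[P]) -> Option<Vec<P>> {
+///     match row[0] {
+///         P::Ctor(v) if v != c => None,
+///         _ => Some(row[1..].to_vec()),
+///     }
+/// }
+///
+/// // `U(M, p)`: is there a value that matches `p` but no row of `m`?
+/// fn is_useful(m: &[Vec<P>], p: &[P]) -> bool {
+///     if p.is_empty() {
+///         // Base case: `p` is useful iff the matrix has no row left.
+///         return m.is_empty();
+///     }
+///     pat_constructors(p[0]).into_iter().any(|c| {
+///         let m2: Vec<Vec<P>> = m.iter().filter_map(|row| specialize(c, row)).collect();
+///         let p2 = specialize(c, p).unwrap();
+///         is_useful(&m2, &p2)
+///     })
+/// }
+/// ```
+/// With this model, `is_useful(&[vec![P::Ctor(V::A)], vec![P::Ctor(V::B)]], &[P::Wild])` returns
+/// `true`: a trailing wildcard arm is still useful because `C` is not covered, i.e. a match on
+/// just `A` and `B` is not exhaustive.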
use self::Constructor::*; use self::Usefulness::*; use self::WitnessPreference::*; -use rustc_data_structures::fx::FxHashMap; +use rustc_data_structures::fx::FxHashSet; use rustc_index::vec::Idx; +use super::{compare_const_vals, PatternFoldable, PatternFolder}; use super::{FieldPat, Pat, PatKind, PatRange}; -use super::{PatternFoldable, PatternFolder, compare_const_vals}; use rustc::hir::def_id::DefId; use rustc::hir::RangeEnd; -use rustc::ty::{self, Ty, TyCtxt, TypeFoldable, Const}; -use rustc::ty::layout::{Integer, IntegerExt, VariantIdx, Size}; +use rustc::ty::layout::{Integer, IntegerExt, Size, VariantIdx}; +use rustc::ty::{self, Const, Ty, TyCtxt, TypeFoldable}; +use rustc::mir::interpret::{truncate, AllocId, ConstValue, Pointer, Scalar}; use rustc::mir::Field; -use rustc::mir::interpret::{ConstValue, Scalar, truncate, AllocId, Pointer}; +use rustc::util::captures::Captures; use rustc::util::common::ErrorReported; use syntax::attr::{SignedInt, UnsignedInt}; @@ -180,13 +300,13 @@ use syntax_pos::{Span, DUMMY_SP}; use arena::TypedArena; -use smallvec::{SmallVec, smallvec}; -use std::cmp::{self, Ordering, min, max}; +use smallvec::{smallvec, SmallVec}; +use std::cmp::{self, max, min, Ordering}; +use std::convert::TryInto; use std::fmt; use std::iter::{FromIterator, IntoIterator}; use std::ops::RangeInclusive; use std::u128; -use std::convert::TryInto; pub fn expand_pattern<'a, 'tcx>(cx: &MatchCheckCtxt<'a, 'tcx>, pat: Pat<'tcx>) -> &'a Pat<'tcx> { cx.pattern_arena.alloc(LiteralExpander { tcx: cx.tcx }.fold_pattern(&pat)) @@ -215,11 +335,8 @@ impl LiteralExpander<'tcx> { // the easy case, deref a reference (ConstValue::Scalar(Scalar::Ptr(p)), x, y) if x == y => { let alloc = self.tcx.alloc_map.lock().unwrap_memory(p.alloc_id); - ConstValue::ByRef { - alloc, - offset: p.offset, - } - }, + ConstValue::ByRef { alloc, offset: p.offset } + } // unsize array to slice if pattern is array but match value or other patterns are slice (ConstValue::Scalar(Scalar::Ptr(p)), ty::Array(t, n), ty::Slice(u)) => { assert_eq!(t, u); @@ -228,12 +345,11 @@ impl LiteralExpander<'tcx> { start: p.offset.bytes().try_into().unwrap(), end: n.eval_usize(self.tcx, ty::ParamEnv::empty()).try_into().unwrap(), } - }, + } // fat pointers stay the same - | (ConstValue::Slice { .. }, _, _) + (ConstValue::Slice { .. }, _, _) | (_, ty::Slice(_), ty::Slice(_)) - | (_, ty::Str, ty::Str) - => val, + | (_, ty::Str, ty::Str) => val, // FIXME(oli-obk): this is reachable for `const FOO: &&&u32 = &&&42;` being used _ => bug!("cannot deref {:#?}, {} -> {}", val, crty, rty), } @@ -246,84 +362,177 @@ impl PatternFolder<'tcx> for LiteralExpander<'tcx> { match (&pat.ty.kind, &*pat.kind) { ( &ty::Ref(_, rty, _), - &PatKind::Constant { value: Const { - val, - ty: ty::TyS { kind: ty::Ref(_, crty, _), .. }, - } }, - ) => { - Pat { - ty: pat.ty, - span: pat.span, - kind: box PatKind::Deref { - subpattern: Pat { - ty: rty, - span: pat.span, - kind: box PatKind::Constant { value: self.tcx.mk_const(Const { + &PatKind::Constant { + value: Const { val, ty: ty::TyS { kind: ty::Ref(_, crty, _), .. } }, + }, + ) => Pat { + ty: pat.ty, + span: pat.span, + kind: box PatKind::Deref { + subpattern: Pat { + ty: rty, + span: pat.span, + kind: box PatKind::Constant { + value: self.tcx.mk_const(Const { val: self.fold_const_value_deref(*val, rty, crty), ty: rty, - }) }, - } - } - } - } - (_, &PatKind::Binding { subpattern: Some(ref s), .. 
}) => { - s.fold_with(self) - } - _ => pat.super_fold_with(self) + }), + }, + }, + }, + }, + (_, &PatKind::Binding { subpattern: Some(ref s), .. }) => s.fold_with(self), + _ => pat.super_fold_with(self), } } } -impl<'tcx> Pat<'tcx> { - fn is_wildcard(&self) -> bool { - match *self.kind { - PatKind::Binding { subpattern: None, .. } | PatKind::Wild => - true, - _ => false +/// A row of a matrix. +#[derive(Debug, Clone)] +pub struct PatStack<'p, 'tcx> { + // Rows of len 1 are very common, which is why `SmallVec[_; 2]` works well. + patterns: SmallVec<[&'p Pat<'tcx>; 2]>, + // This caches the invocation of `pat_constructors` on the head of the stack. We avoid mutating + // `self` to be sure we don't keep an invalid cache around. Must be non-empty unless `patterns` + // is empty. + head_ctor: Option>, +} + +impl<'p, 'tcx> PatStack<'p, 'tcx> { + pub fn from_pattern(cx: &MatchCheckCtxt<'_, 'tcx>, pat: &'p Pat<'tcx>) -> Self { + PatStack::from_vec(cx, smallvec![pat]) + } + + fn empty() -> Self { + PatStack { patterns: smallvec![], head_ctor: None } + } + + fn from_vec(cx: &MatchCheckCtxt<'_, 'tcx>, patterns: SmallVec<[&'p Pat<'tcx>; 2]>) -> Self { + if patterns.is_empty() { + return PatStack::empty(); } + let head_ctor = Some(pat_constructors(cx.tcx, cx.param_env, patterns[0])); + PatStack { patterns, head_ctor } + } + + fn from_slice(cx: &MatchCheckCtxt<'_, 'tcx>, s: &[&'p Pat<'tcx>]) -> Self { + PatStack::from_vec(cx, SmallVec::from_slice(s)) + } + + fn is_empty(&self) -> bool { + self.patterns.is_empty() + } + + fn len(&self) -> usize { + self.patterns.len() + } + + fn head<'a>(&'a self) -> &'p Pat<'tcx> { + self.patterns[0] + } + + fn head_ctors(&self) -> &Constructor<'tcx> { + self.head_ctor.as_ref().unwrap() + } + + fn iter(&self) -> impl Iterator> { + self.patterns.iter().map(|p| *p) + } + + /// This computes `S(constructor, self)`. See top of the file for explanations. + fn specialize<'a, 'q>( + &self, + cx: &MatchCheckCtxt<'a, 'tcx>, + constructor: &Constructor<'tcx>, + ctor_wild_subpatterns: &[&'q Pat<'tcx>], + ) -> Option> + where + 'a: 'q, + 'p: 'q, + { + let new_head = specialize_one_pattern(cx, self.head(), constructor, ctor_wild_subpatterns); + let result = new_head.map(|new_head| { + let mut pats = new_head.patterns; + pats.extend_from_slice(&self.patterns[1..]); + PatStack::from_vec(cx, pats) + }); + debug!("specialize({:#?}, {:#?}) = {:#?}", self, constructor, result); + result } } -/// A 2D matrix. Nx1 matrices are very common, which is why `SmallVec[_; 2]` -/// works well for each row. -pub struct Matrix<'p, 'tcx>(Vec; 2]>>); +impl<'p, 'tcx> Default for PatStack<'p, 'tcx> { + fn default() -> Self { + PatStack::empty() + } +} + +/// A 2D matrix. +pub struct Matrix<'p, 'tcx>(Vec>); impl<'p, 'tcx> Matrix<'p, 'tcx> { pub fn empty() -> Self { Matrix(vec![]) } - pub fn push(&mut self, row: SmallVec<[&'p Pat<'tcx>; 2]>) { + pub fn push(&mut self, row: PatStack<'p, 'tcx>) { self.0.push(row) } + + /// Iterate over the first component of each row + fn heads<'a>(&'a self) -> impl Iterator> + Captures<'p> { + self.0.iter().map(|r| r.head()) + } + + fn head_ctors(&self) -> Vec<&Constructor<'tcx>> { + self.0.iter().map(|r| r.head_ctors()).filter(|ctor| !ctor.is_wildcard()).collect() + } + + /// This computes `S(constructor, self)`. See top of the file for explanations. 
+ fn specialize<'a, 'q>( + &self, + cx: &MatchCheckCtxt<'a, 'tcx>, + constructor: &Constructor<'tcx>, + ctor_wild_subpatterns: &[&'q Pat<'tcx>], + ) -> Matrix<'q, 'tcx> + where + 'a: 'q, + 'p: 'q, + { + Matrix( + self.0 + .iter() + .filter_map(|r| r.specialize(cx, constructor, ctor_wild_subpatterns)) + .collect(), + ) + } } /// Pretty-printer for matrices of patterns, example: -/// ++++++++++++++++++++++++++ -/// + _ + [] + -/// ++++++++++++++++++++++++++ -/// + true + [First] + -/// ++++++++++++++++++++++++++ -/// + true + [Second(true)] + -/// ++++++++++++++++++++++++++ -/// + false + [_] + -/// ++++++++++++++++++++++++++ -/// + _ + [_, _, ..tail] + -/// ++++++++++++++++++++++++++ +/// +++++++++++++++++++++++++++++ +/// + _ + [] + +/// +++++++++++++++++++++++++++++ +/// + true + [First] + +/// +++++++++++++++++++++++++++++ +/// + true + [Second(true)] + +/// +++++++++++++++++++++++++++++ +/// + false + [_] + +/// +++++++++++++++++++++++++++++ +/// + _ + [_, _, tail @ ..] + +/// +++++++++++++++++++++++++++++ impl<'p, 'tcx> fmt::Debug for Matrix<'p, 'tcx> { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { write!(f, "\n")?; let &Matrix(ref m) = self; - let pretty_printed_matrix: Vec> = m.iter().map(|row| { - row.iter().map(|pat| format!("{:?}", pat)).collect() - }).collect(); + let pretty_printed_matrix: Vec> = + m.iter().map(|row| row.iter().map(|pat| format!("{:?}", pat)).collect()).collect(); let column_count = m.iter().map(|row| row.len()).max().unwrap_or(0); assert!(m.iter().all(|row| row.len() == column_count)); - let column_widths: Vec = (0..column_count).map(|col| { - pretty_printed_matrix.iter().map(|row| row[col].len()).max().unwrap_or(0) - }).collect(); + let column_widths: Vec = (0..column_count) + .map(|col| pretty_printed_matrix.iter().map(|row| row[col].len()).max().unwrap_or(0)) + .collect(); let total_width = column_widths.iter().cloned().sum::() + column_count * 3 + 1; let br = "+".repeat(total_width); @@ -342,9 +551,10 @@ impl<'p, 'tcx> fmt::Debug for Matrix<'p, 'tcx> { } } -impl<'p, 'tcx> FromIterator; 2]>> for Matrix<'p, 'tcx> { +impl<'p, 'tcx> FromIterator> for Matrix<'p, 'tcx> { fn from_iter(iter: T) -> Self - where T: IntoIterator; 2]>> + where + T: IntoIterator>, { Matrix(iter.into_iter().collect()) } @@ -356,12 +566,11 @@ pub struct MatchCheckCtxt<'a, 'tcx> { /// checking inhabited-ness of types because whether a type is (visibly) /// inhabited can depend on whether it was defined in the current module or /// not. E.g., `struct Foo { _private: ! }` cannot be seen to be empty - /// outside it's module and should not be matchable with an empty match + /// outside its module and should not be matchable with an empty match /// statement. pub module: DefId, param_env: ty::ParamEnv<'tcx>, pub pattern_arena: &'a TypedArena>, - pub byte_array_map: FxHashMap<*const Pat<'tcx>, Vec<&'a Pat<'tcx>>>, } impl<'a, 'tcx> MatchCheckCtxt<'a, 'tcx> { @@ -376,13 +585,7 @@ impl<'a, 'tcx> MatchCheckCtxt<'a, 'tcx> { { let pattern_arena = TypedArena::default(); - f(MatchCheckCtxt { - tcx, - param_env, - module, - pattern_arena: &pattern_arena, - byte_array_map: FxHashMap::default(), - }) + f(MatchCheckCtxt { tcx, param_env, module, pattern_arena: &pattern_arena }) } fn is_uninhabited(&self, ty: Ty<'tcx>) -> bool { @@ -418,8 +621,10 @@ impl<'a, 'tcx> MatchCheckCtxt<'a, 'tcx> { } } +/// Constructors, including base constructors and meta-constructors. 
#[derive(Clone, Debug, PartialEq)] enum Constructor<'tcx> { + // Base constructors /// The constructor of all patterns that don't vary by constructor, /// e.g., struct patterns and fixed-length arrays. Single, @@ -427,16 +632,39 @@ enum Constructor<'tcx> { Variant(DefId), /// Literal values. ConstantValue(&'tcx ty::Const<'tcx>), - /// Ranges of literal values (`2..=5` and `2..5`). - ConstantRange(u128, u128, Ty<'tcx>, RangeEnd), /// Array patterns of length n. - Slice(u64), + FixedLenSlice(u64), + + // Meta-constructors + /// Ranges of integer literal values (`2..=5` and `2..5`). + IntRange(IntRange<'tcx>), + /// Ranges of non-integer literal values (`2.0..=5.2`). + ConstantRange(&'tcx ty::Const<'tcx>, &'tcx ty::Const<'tcx>, RangeEnd), + /// Slice patterns. Captures any array constructor of length >= i+j. + VarLenSlice(u64, u64), + /// Wildcard meta-constructor. Captures all possible constructors for a given type. + Wildcard, + /// Special wildcard-like constructor that carries only a subset of all possible constructors. + /// It is used only when splitting `Constructor::Wildcard` and some constructors were not + /// present in the matrix. + /// The contained list must be nonempty. + MissingConstructors(MissingConstructors<'tcx>), } impl<'tcx> Constructor<'tcx> { fn is_slice(&self) -> bool { match self { - Slice { .. } => true, + FixedLenSlice(..) | VarLenSlice(..) => true, + _ => false, + } + } + + fn is_wildcard(&self) -> bool { + match self { + Wildcard => true, + MissingConstructors(_) => bug!( + "not sure if MissingConstructors should be a wildcard; shouldn't happen anyways." + ), _ => false, } } @@ -447,29 +675,646 @@ impl<'tcx> Constructor<'tcx> { adt: &'tcx ty::AdtDef, ) -> VariantIdx { match self { - &Variant(id) => adt.variant_index_with_id(id), - &Single => { + Variant(id) => adt.variant_index_with_id(*id), + Single => { assert!(!adt.is_enum()); VariantIdx::new(0) } - &ConstantValue(c) => crate::const_eval::const_variant_index(cx.tcx, cx.param_env, c), - _ => bug!("bad constructor {:?} for adt {:?}", self, adt) + ConstantValue(c) => crate::const_eval::const_variant_index(cx.tcx, cx.param_env, c), + _ => bug!("bad constructor {:?} for adt {:?}", self, adt), + } + } + + /// Split a constructor into equivalence classes of constructors that behave the same + /// for the given matrix. See description of the algorithm for details. + /// Note: We can rely on this returning an empty list if the type is (visibly) uninhabited. + fn split_meta_constructor( + &self, + cx: &MatchCheckCtxt<'_, 'tcx>, + ty: Ty<'tcx>, + head_ctors: &Vec<&Constructor<'tcx>>, + ) -> SmallVec<[Constructor<'tcx>; 1]> { + debug!("split_meta_constructor {:?}", self); + assert!(!head_ctors.iter().any(|c| c.is_wildcard())); + + match *self { + // Any base constructor can be used unchanged. + Single | Variant(_) | ConstantValue(_) | FixedLenSlice(_) => smallvec![self.clone()], + IntRange(ref ctor_range) if IntRange::should_treat_range_exhaustively(cx.tcx, ty) => { + // Splitting up a range naïvely would mean creating a separate constructor for + // every single value in the range, which is clearly impractical. We therefore want + // to keep together subranges for which the specialisation will be identical across + // all values in that range. These classes are grouped by the patterns that apply + // to them (in the matrix `M`). We can split the range whenever the patterns that + // apply to that range (specifically: the patterns that *intersect* with that + // range) change. 
Our solution, therefore, is to split the range constructor into + // subranges at every single point where the group of intersecting patterns changes + // (using the method described below). The nice thing about this is that the number + // of subranges is linear in the number of rows in the matrix (i.e., the number of + // cases in the `match` statement), so we don't need to be worried about matching + // over a gargantuan number of ranges. + // + // Essentially, given the first column of a matrix representing ranges, that looks + // like the following: + // + // |------| |----------| |-------| || + // |-------| |-------| |----| || + // |---------| + // + // We split the ranges up into equivalence classes so the ranges are no longer + // overlapping: + // + // |--|--|||-||||--||---|||-------| |-|||| || + // + // The logic for determining how to split the ranges is fairly straightforward: we + // calculate boundaries for each interval range, sort them, then create + // constructors for each new interval between every pair of boundary points. (This + // essentially amounts to performing the intuitive merging operation depicted + // above.) + + /// Represents a border between 2 integers. Because the intervals spanning borders + /// must be able to cover every integer, we need to be able to represent + /// 2^128 + 1 such borders. + #[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)] + enum Border { + JustBefore(u128), + AfterMax, + } + + // A function for extracting the borders of an integer interval. + fn range_borders(r: IntRange<'_>) -> impl Iterator { + let (lo, hi) = r.range.into_inner(); + let from = Border::JustBefore(lo); + let to = match hi.checked_add(1) { + Some(m) => Border::JustBefore(m), + None => Border::AfterMax, + }; + vec![from, to].into_iter() + } + + // `borders` is the set of borders between equivalence classes: each equivalence + // class lies between 2 borders. + let row_borders = head_ctors + .iter() + .map(|c| *c) + .filter_map(IntRange::from_ctor) + .filter_map(|range| ctor_range.intersection(cx.tcx, &range)) + .flat_map(|range| range_borders(range)); + let ctor_borders = range_borders(ctor_range.clone()); + let mut borders: Vec<_> = row_borders.chain(ctor_borders).collect(); + borders.sort_unstable(); + + // We're going to iterate through every adjacent pair of borders, making sure that + // each represents an interval of nonnegative length, and convert each such + // interval into a constructor. + borders + .windows(2) + .filter_map(|window| match (window[0], window[1]) { + (Border::JustBefore(n), Border::JustBefore(m)) => { + if n < m { + Some(n..=(m - 1)) + } else { + None + } + } + (Border::JustBefore(n), Border::AfterMax) => Some(n..=u128::MAX), + (Border::AfterMax, _) => None, + }) + .map(|range| IntRange::new(ty, range)) + .map(IntRange) + .collect() + } + // When not treated exhaustively, don't split ranges. + ConstantRange(..) | IntRange(..) => smallvec![self.clone()], + VarLenSlice(self_prefix, self_suffix) => { + // A variable-length slice pattern is matched by an infinite collection of + // fixed-length array patterns. However it turns out that for each finite set of + // patterns `P`, all sufficiently large array lengths are equivalent. + // + // Each slice `s` with a "sufficiently-large" length `l ≥ L` that applies + // to exactly the subset `Pₜ` of `P` can be transformed to a slice + // `sₘ` for each sufficiently-large length `m` that applies to exactly + // the same subset of `P`. 
+ // + // Because of that, each witness for reachability-checking from one of the + // sufficiently-large lengths can be transformed to an equally-valid witness from + // any other length, so we all slice lengths from the "minimal sufficiently-large + // length" until infinity will behave the same. + // + // Note that the fact that there is a *single* `sₘ` for each `m`, + // not depending on the specific pattern in `P`, is important: if + // you look at the pair of patterns + // `[true, ..]` + // `[.., false]` + // Then any slice of length ≥1 that matches one of these two + // patterns can be trivially turned to a slice of any + // other length ≥1 that matches them and vice-versa - + // but the slice from length 2 `[false, true]` that matches neither + // of these patterns can't be turned to a slice from length 1 that + // matches neither of these patterns, so we have to consider + // slices from length 2 there. + // + // Now, to see that that length exists and find it, observe that slice + // patterns are either "fixed-length" patterns (`[_, _, _]`) or + // "variable-length" patterns (`[_, .., _]`). + // + // For fixed-length patterns, all slices with lengths *longer* than + // the pattern's length have the same outcome (of not matching), so + // as long as `L` is greater than the pattern's length we can pick + // any `sₘ` from that length and get the same result. + // + // For variable-length patterns, the situation is more complicated, + // because as seen above the precise value of `sₘ` matters. + // + // However, for each variable-length pattern `p` with a prefix of length + // `plₚ` and suffix of length `slₚ`, only the first `plₚ` and the last + // `slₚ` elements are examined. + // + // Therefore, as long as `L` is positive (to avoid concerns about empty + // types), all elements after the maximum prefix length and before + // the maximum suffix length are not examined by any variable-length + // pattern, and therefore can be added/removed without affecting + // them - creating equivalent patterns from any sufficiently-large + // length. + // + // Of course, if fixed-length patterns exist, we must be sure + // that our length is large enough to miss them all, so + // we can pick `L = max(max(FIXED_LEN)+1, max(PREFIX_LEN) + max(SUFFIX_LEN))` + // + // For example, with the above pair of patterns, all elements + // but the first and last can be added/removed, so any + // witness of length ≥2 (say, `[false, false, true]`) can be + // turned to a witness from any other length ≥2. + // + // For diagnostics, we keep the prefix and suffix lengths separate, so in the case + // where `max(FIXED_LEN)+1` is the largest, we adapt `max(PREFIX_LEN)` accordingly, + // so that `max(PREFIX_LEN) + max(SUFFIX_LEN) = L`. + + let mut max_prefix_len = self_prefix; + let mut max_suffix_len = self_suffix; + let mut max_fixed_len = 0; + + for ctor in head_ctors { + match **ctor { + ConstantValue(value) => { + // Extract the length of an array/slice from a constant + match (value.val, &value.ty.kind) { + (_, ty::Array(_, n)) => { + max_fixed_len = + cmp::max(max_fixed_len, n.eval_usize(cx.tcx, cx.param_env)) + } + (ConstValue::Slice { start, end, .. 
}, ty::Slice(_)) => { + max_fixed_len = cmp::max(max_fixed_len, (end - start) as u64) + } + _ => {} + } + } + FixedLenSlice(len) => { + max_fixed_len = cmp::max(max_fixed_len, len); + } + VarLenSlice(prefix, suffix) => { + max_prefix_len = cmp::max(max_prefix_len, prefix); + max_suffix_len = cmp::max(max_suffix_len, suffix); + } + _ => {} + } + } + + if max_fixed_len + 1 >= max_prefix_len + max_suffix_len { + max_prefix_len = cmp::max(max_prefix_len, max_fixed_len + 1 - max_suffix_len); + } + + (self_prefix + self_suffix..max_prefix_len + max_suffix_len) + .map(FixedLenSlice) + .chain(Some(VarLenSlice(max_prefix_len, max_suffix_len))) + .collect() + } + Wildcard => { + let is_declared_nonexhaustive = !cx.is_local(ty) && cx.is_non_exhaustive_enum(ty); + + // `all_ctors` is the list of all the constructors for the given type. + let all_ctors = all_constructors(cx, ty); + + let is_privately_empty = all_ctors.is_empty() && !cx.is_uninhabited(ty); + + // For privately empty and non-exhaustive enums, we work as if there were an "extra" + // `_` constructor for the type, so we can never match over all constructors. + // See the `match_privately_empty` test for details. + // + // FIXME: currently the only way I know of something can + // be a privately-empty enum is when the exhaustive_patterns + // feature flag is not present, so this is only + // needed for that case. + let is_non_exhaustive = is_privately_empty + || is_declared_nonexhaustive + || (ty.is_ptr_sized_integral() + && !cx.tcx.features().precise_pointer_size_matching); + + // `missing_ctors` is the set of constructors from the same type as the + // first column of `matrix` that are matched only by wildcard patterns + // from the first column. + // + // Therefore, if there is some pattern that is unmatched by `matrix`, + // it will still be unmatched if the first constructor is replaced by + // any of the constructors in `missing_ctors` + let missing_ctors = MissingConstructors::new( + cx.tcx, + cx.param_env, + all_ctors, + head_ctors.iter().map(|c| (**c).clone()).collect(), + ); + debug!( + "missing_ctors.is_empty()={:#?} is_non_exhaustive={:#?}", + missing_ctors.is_empty(), + is_non_exhaustive, + ); + + // If there are some missing constructors, we only need to specialize relative + // to them and we can ignore the other ones. Otherwise, we have to try all + // existing constructors one-by-one. + if is_non_exhaustive { + // We pretend the type has an additional `_` constructor, that counts as a + // missing constructor. So we return that constructor. + smallvec![Wildcard] + } else if !missing_ctors.is_empty() { + if head_ctors.is_empty() { + // If head_ctors is empty, then all constructors of the type behave the same + // so we can keep the Wildcard meta-constructor. + smallvec![Wildcard] + } else { + // Otherwise, we have a set of missing constructors that is neither empty + // not equal to all_constructors. Since all missing constructors will + // behave the same (i.e. will be matched only by wildcards), we return a + // meta-constructor that contains all of them at once. + smallvec![MissingConstructors(missing_ctors)] + } + } else { + // Here we know there are no missing constructors, so we have to try all + // existing constructors one-by-one. + let (all_ctors, _) = missing_ctors.into_inner(); + // Recursively split newly generated list of constructors. This list must not + // contain any wildcards so we don't recurse infinitely. 
+ all_ctors + .into_iter() + .flat_map(|ctor| ctor.split_meta_constructor(cx, ty, head_ctors)) + .collect() + } + } + MissingConstructors(_) => bug!("shouldn't try to split constructor {:?}", self), + } + } + + /// Returns a collection of constructors that spans the constructors covered by `self`, + /// subtracted by the constructors covered by `head_ctors`: i.e., `self \ head_ctors` (in set + /// notation). + fn subtract_meta_constructor( + self, + _tcx: TyCtxt<'tcx>, + _param_env: ty::ParamEnv<'tcx>, + used_ctors: &Vec>, + ) -> SmallVec<[Constructor<'tcx>; 1]> { + debug!("subtract_meta_constructor {:?}", self); + // The input must not contain a wildcard + assert!(!used_ctors.iter().any(|c| c.is_wildcard())); + + match self { + // Those constructors can't intersect with a non-wildcard meta-constructor, so we're + // fine just comparing for equality. + Single | Variant(_) | ConstantRange(..) | ConstantValue(..) => { + if used_ctors.iter().any(|c| c == &self) { smallvec![] } else { smallvec![self] } + } + FixedLenSlice(self_len) => { + let overlaps = |c: &Constructor<'_>| match c { + FixedLenSlice(other_len) => *other_len == self_len, + VarLenSlice(prefix, suffix) => prefix + suffix <= self_len, + _ => false, + }; + if used_ctors.iter().any(overlaps) { smallvec![] } else { smallvec![self] } + } + VarLenSlice(self_prefix, self_suffix) => { + // Assume we have the following match: + // ``` + // match slice { + // [0] => {} + // [_, _, _] => {} + // [1, 2, 3, 4, 5, 6, ..] => {} + // [_, _, _, _, _, _, _] => {} + // [0, ..] => {} + // } + // ``` + // We want to know which constructors are matched by the last pattern, but are not + // matched by the first four ones. Since we only speak of constructors here, we + // only care about the length of the slices and not the particular subpatterns. + // For that, we first notice that because of the third pattern, all constructors of + // lengths 6 or more are covered already. `max_len` will be `Some(6)`. + // Then we'll look at fixed-length constructors to see which are missing. The + // returned list of constructors will be those of lengths in 1..6 that are not + // present in the match. Lengths 1, 3 and 7 are matched already, so we get + // `[FixedLenSlice(2), FixedLenSlice(4), FixedLenSlice(5)]`. + // If we had removed the third pattern, we would have instead returned + // `[FixedLenSlice(2), FixedLenSlice(4), FixedLenSlice(5), FixedLenSlice(6), + // VarLenSlice(8, 0)]`. + + // Initially we cover all slice lengths starting from self_len. + let self_len = self_prefix + self_suffix; + + // If there is a VarLenSlice(n) in used_ctors, then we have to discard + // all lengths >= n. So we pick the smallest such n. + let max_len: Option<_> = used_ctors + .iter() + .filter_map(|c: &Constructor<'tcx>| match c { + VarLenSlice(prefix, suffix) => Some(prefix + suffix), + _ => None, + }) + .min(); + + // The remaining range of lengths is now either `self_len..` + // or `self_len..max_len`. We then remove from that range all the + // individual FixedLenSlice lengths in used_ctors. + + // If max_len <= self_len there are no lengths remaining. 
+ if let Some(max_len) = max_len { + if max_len <= self_len { + return smallvec![]; + } + } + + // Extract fixed-size lengths + let used_fixed_lengths: FxHashSet = used_ctors + .iter() + .filter_map(|c: &Constructor<'tcx>| match c { + FixedLenSlice(len) => Some(*len), + _ => None, + }) + .collect(); + + if let Some(max_len) = max_len { + (self_len..max_len) + .filter(|len| !used_fixed_lengths.contains(len)) + .map(FixedLenSlice) + .collect() + } else { + // Choose a length for which we know that all larger lengths remain in the + // output. + let min_free_length = used_fixed_lengths + .iter() + .map(|len| len + 1) + .chain(Some(self_len)) + .max() + .unwrap(); + + // We know min_free_length >= self_len >= self_suffix so this can't underflow. + let final_varlen = VarLenSlice(min_free_length - self_suffix, self_suffix); + + (self_len..min_free_length) + .filter(|len| !used_fixed_lengths.contains(len)) + .map(FixedLenSlice) + .chain(Some(final_varlen)) + .collect() + } + } + IntRange(range) => { + let used_ranges = used_ctors.iter().filter_map(IntRange::from_ctor); + let mut remaining_ranges: SmallVec<[IntRange<'tcx>; 1]> = smallvec![range]; + + // For each used ctor, subtract from the current set of constructors. + for used_range in used_ranges { + remaining_ranges = remaining_ranges + .into_iter() + .flat_map(|range| used_range.subtract_from(range)) + .collect(); + + // If the constructors that have been considered so far already cover + // the entire range of `self`, no need to look at more constructors. + if remaining_ranges.is_empty() { + break; + } + } + + remaining_ranges.into_iter().map(IntRange).collect() + } + Wildcard | MissingConstructors(_) => { + bug!("shouldn't try to subtract constructor {:?}", self) + } + } + } + + /// This returns one wildcard pattern for each argument to this constructor. + fn wildcard_subpatterns<'a>( + &self, + cx: &MatchCheckCtxt<'a, 'tcx>, + ty: Ty<'tcx>, + ) -> impl Iterator> + DoubleEndedIterator { + debug!("wildcard_subpatterns({:#?}, {:?})", self, ty); + let subpattern_types = match *self { + Single | Variant(_) => match ty.kind { + ty::Tuple(ref fs) => fs.into_iter().map(|t| t.expect_ty()).collect(), + ty::Ref(_, rty, _) => vec![rty], + ty::Adt(adt, substs) => { + if adt.is_box() { + // Use T as the sub pattern type of Box. + vec![substs.type_at(0)] + } else { + adt.variants[self.variant_index_for_adt(cx, adt)] + .fields + .iter() + .map(|field| { + let is_visible = adt.is_enum() + || field.vis.is_accessible_from(cx.module, cx.tcx); + if is_visible { + let ty = field.ty(cx.tcx, substs); + match ty.kind { + // If the field type returned is an array of an unknown + // size return an TyErr. + ty::Array(_, len) + if len + .try_eval_usize(cx.tcx, cx.param_env) + .is_none() => + { + cx.tcx.types.err + } + _ => ty, + } + } else { + // Treat all non-visible fields as TyErr. They + // can't appear in any other pattern from + // this match (because they are private), + // so their type does not matter - but + // we don't want to know they are + // uninhabited. 
+ cx.tcx.types.err + } + }) + .collect() + } + } + ty::Slice(ty) | ty::Array(ty, _) => bug!("bad slice pattern {:?} {:?}", self, ty), + _ => vec![], + }, + FixedLenSlice(length) => match ty.kind { + ty::Slice(ty) | ty::Array(ty, _) => (0..length).map(|_| ty).collect(), + _ => bug!("bad slice pattern {:?} {:?}", self, ty), + }, + VarLenSlice(prefix, suffix) => match ty.kind { + ty::Slice(ty) | ty::Array(ty, _) => (0..prefix + suffix).map(|_| ty).collect(), + _ => bug!("bad slice pattern {:?} {:?}", self, ty), + }, + ConstantValue(_) + | MissingConstructors(_) + | ConstantRange(..) + | IntRange(..) + | Wildcard => vec![], + }; + + subpattern_types.into_iter().map(|ty| Pat { ty, span: DUMMY_SP, kind: box PatKind::Wild }) + } + + /// This computes the arity of a constructor. The arity of a constructor + /// is the number of its arguments. + /// + /// For instance, a tuple pattern `(_, 42, Some([]))` has arity 3, a struct pattern's arity is + /// the number of fields it contains, etc. + fn arity<'a>(&self, cx: &MatchCheckCtxt<'a, 'tcx>, ty: Ty<'tcx>) -> u64 { + debug!("Constructor::arity({:#?}, {:?})", self, ty); + match *self { + Single | Variant(_) => match ty.kind { + ty::Tuple(ref fs) => fs.len() as u64, + ty::Ref(..) => 1, + ty::Adt(adt, _) => { + adt.variants[self.variant_index_for_adt(cx, adt)].fields.len() as u64 + } + ty::Slice(..) | ty::Array(..) => bug!("bad slice pattern {:?} {:?}", self, ty), + _ => 0, + }, + FixedLenSlice(length) => length, + VarLenSlice(prefix, suffix) => prefix + suffix, + ConstantValue(_) + | ConstantRange(..) + | IntRange(..) + | Wildcard + | MissingConstructors(_) => 0, } } + + /// Apply a constructor to a list of patterns, yielding a new pattern. `pats` + /// must have as many elements as this constructor's arity. + /// + /// Examples: + /// `self`: `Constructor::Single` + /// `ty`: `(u32, u32, u32)` + /// `pats`: `[10, 20, _]` + /// returns `(10, 20, _)` + /// + /// `self`: `Constructor::Variant(Option::Some)` + /// `ty`: `Option` + /// `pats`: `[false]` + /// returns `Some(false)` + fn apply<'a>( + &self, + cx: &MatchCheckCtxt<'a, 'tcx>, + ty: Ty<'tcx>, + pats: impl IntoIterator>, + ) -> SmallVec<[Pat<'tcx>; 1]> { + let mut pats = pats.into_iter(); + let pat = match *self { + Single | Variant(_) => match ty.kind { + ty::Adt(..) | ty::Tuple(..) => { + let subpatterns = pats + .enumerate() + .map(|(i, p)| FieldPat { field: Field::new(i), pattern: p }) + .collect(); + + match ty.kind { + ty::Adt(adt_def, substs) if adt_def.is_enum() => PatKind::Variant { + adt_def, + substs, + variant_index: self.variant_index_for_adt(cx, adt_def), + subpatterns, + }, + _ => PatKind::Leaf { subpatterns }, + } + } + ty::Ref(..) 
=> PatKind::Deref { subpattern: pats.nth(0).unwrap() }, + _ => PatKind::Wild, + }, + FixedLenSlice(_) => { + PatKind::Slice { prefix: pats.collect(), slice: None, suffix: vec![] } + } + VarLenSlice(prefix_len, _suffix_len) => match ty.kind { + ty::Slice(ty) | ty::Array(ty, _) => { + let prefix = pats.by_ref().take(prefix_len as usize).collect(); + let suffix = pats.collect(); + let wild = Pat { ty, span: DUMMY_SP, kind: Box::new(PatKind::Wild) }; + PatKind::Slice { prefix, slice: Some(wild), suffix } + } + _ => bug!("bad slice pattern {:?} {:?}", self, ty), + }, + ConstantValue(value) => PatKind::Constant { value }, + ConstantRange(lo, hi, end) => PatKind::Range(PatRange { lo, hi, end }), + IntRange(ref range) => range.to_patkind(cx.tcx), + Wildcard => PatKind::Wild, + MissingConstructors(ref missing_ctors) => { + // Construct for each missing constructor a "wildcard" version of this + // constructor, that matches everything that can be built with + // it. For example, if `ctor` is a `Constructor::Variant` for + // `Option::Some`, we get the pattern `Some(_)`. + return missing_ctors + .iter() + .flat_map(|ctor| ctor.apply_wildcards(cx, ty)) + .collect(); + } + }; + + smallvec![Pat { ty, span: DUMMY_SP, kind: Box::new(pat) }] + } + + /// Like `apply`, but where all the subpatterns are wildcards `_`. + fn apply_wildcards<'a>( + &self, + cx: &MatchCheckCtxt<'a, 'tcx>, + ty: Ty<'tcx>, + ) -> SmallVec<[Pat<'tcx>; 1]> { + let pats = self.wildcard_subpatterns(cx, ty).rev(); + self.apply(cx, ty, pats) + } } #[derive(Clone, Debug)] pub enum Usefulness<'tcx> { Useful, UsefulWithWitness(Vec>), - NotUseful + NotUseful, } impl<'tcx> Usefulness<'tcx> { + fn new_useful(preference: WitnessPreference) -> Self { + match preference { + ConstructWitness => UsefulWithWitness(vec![Witness(vec![])]), + LeaveOutWitness => Useful, + } + } + fn is_useful(&self) -> bool { match *self { NotUseful => false, - _ => true + _ => true, + } + } + + fn apply_constructor( + self, + cx: &MatchCheckCtxt<'_, 'tcx>, + ctor: &Constructor<'tcx>, + ty: Ty<'tcx>, + ) -> Self { + match self { + UsefulWithWitness(witnesses) => UsefulWithWitness( + witnesses + .into_iter() + .flat_map(|witness| witness.apply_constructor(cx, &ctor, ty)) + .collect(), + ), + x => x, } } } @@ -477,13 +1322,7 @@ impl<'tcx> Usefulness<'tcx> { #[derive(Copy, Clone, Debug)] pub enum WitnessPreference { ConstructWitness, - LeaveOutWitness -} - -#[derive(Copy, Clone, Debug)] -struct PatCtxt<'tcx> { - ty: Ty<'tcx>, - max_slice_length: u64, + LeaveOutWitness, } /// A witness of non-exhaustiveness for error reporting, represented @@ -527,24 +1366,6 @@ impl<'tcx> Witness<'tcx> { self.0.into_iter().next().unwrap() } - fn push_wild_constructor<'a>( - mut self, - cx: &MatchCheckCtxt<'a, 'tcx>, - ctor: &Constructor<'tcx>, - ty: Ty<'tcx>) - -> Self - { - let sub_pattern_tys = constructor_sub_pattern_tys(cx, ctor, ty); - self.0.extend(sub_pattern_tys.into_iter().map(|ty| { - Pat { - ty, - span: DUMMY_SP, - kind: box PatKind::Wild, - } - })); - self.apply_constructor(cx, ctor, ty) - } - /// Constructs a partial witness for a pattern given a list of /// patterns expanded by the specialization step. /// @@ -553,268 +1374,127 @@ impl<'tcx> Witness<'tcx> { /// of values, V, where each value in that set is not covered by any previously /// used patterns and is covered by the pattern P'. 
Examples: /// - /// left_ty: tuple of 3 elements + /// ty: tuple of 3 elements /// pats: [10, 20, _] => (10, 20, _) /// - /// left_ty: struct X { a: (bool, &'static str), b: usize} + /// ty: struct X { a: (bool, &'static str), b: usize} /// pats: [(false, "foo"), 42] => X { a: (false, "foo"), b: 42 } fn apply_constructor<'a>( mut self, - cx: &MatchCheckCtxt<'a,'tcx>, + cx: &MatchCheckCtxt<'a, 'tcx>, ctor: &Constructor<'tcx>, - ty: Ty<'tcx>) - -> Self - { - let arity = constructor_arity(cx, ctor, ty); - let pat = { + ty: Ty<'tcx>, + ) -> SmallVec<[Self; 1]> { + let arity = ctor.arity(cx, ty); + let applied_pats = { let len = self.0.len() as u64; - let mut pats = self.0.drain((len - arity) as usize..).rev(); - - match ty.kind { - ty::Adt(..) | - ty::Tuple(..) => { - let pats = pats.enumerate().map(|(i, p)| { - FieldPat { - field: Field::new(i), - pattern: p - } - }).collect(); - - if let ty::Adt(adt, substs) = ty.kind { - if adt.is_enum() { - PatKind::Variant { - adt_def: adt, - substs, - variant_index: ctor.variant_index_for_adt(cx, adt), - subpatterns: pats - } - } else { - PatKind::Leaf { subpatterns: pats } - } - } else { - PatKind::Leaf { subpatterns: pats } - } - } - - ty::Ref(..) => { - PatKind::Deref { subpattern: pats.nth(0).unwrap() } - } - - ty::Slice(_) | ty::Array(..) => { - PatKind::Slice { - prefix: pats.collect(), - slice: None, - suffix: vec![] - } - } - - _ => { - match *ctor { - ConstantValue(value) => PatKind::Constant { value }, - ConstantRange(lo, hi, ty, end) => PatKind::Range(PatRange { - lo: ty::Const::from_bits(cx.tcx, lo, ty::ParamEnv::empty().and(ty)), - hi: ty::Const::from_bits(cx.tcx, hi, ty::ParamEnv::empty().and(ty)), - end, - }), - _ => PatKind::Wild, - } - } - } + let pats = self.0.drain((len - arity) as usize..).rev(); + ctor.apply(cx, ty, pats) }; - self.0.push(Pat { - ty, - span: DUMMY_SP, - kind: Box::new(pat), - }); - - self + applied_pats + .into_iter() + .map(|pat| { + let mut w = self.clone(); + w.0.push(pat); + w + }) + .collect() } } /// This determines the set of all possible constructors of a pattern matching -/// values of type `left_ty`. For vectors, this would normally be an infinite set -/// but is instead bounded by the maximum fixed length of slice patterns in -/// the column of patterns being analyzed. +/// values of type `ty`. We possibly return meta-constructors like integer ranges +/// that capture several base constructors at once. /// /// We make sure to omit constructors that are statically impossible. E.g., for /// `Option`, we do not include `Some(_)` in the returned list of constructors. fn all_constructors<'a, 'tcx>( - cx: &mut MatchCheckCtxt<'a, 'tcx>, - pcx: PatCtxt<'tcx>, + cx: &MatchCheckCtxt<'a, 'tcx>, + ty: Ty<'tcx>, ) -> Vec> { - debug!("all_constructors({:?})", pcx.ty); - let ctors = match pcx.ty.kind { + debug!("all_constructors({:?})", ty); + let ctors = match ty.kind { ty::Bool => { - [true, false].iter().map(|&b| { - ConstantValue(ty::Const::from_bool(cx.tcx, b)) - }).collect() + [true, false].iter().map(|&b| ConstantValue(ty::Const::from_bool(cx.tcx, b))).collect() } ty::Array(ref sub_ty, len) if len.try_eval_usize(cx.tcx, cx.param_env).is_some() => { let len = len.eval_usize(cx.tcx, cx.param_env); - if len != 0 && cx.is_uninhabited(sub_ty) { - vec![] - } else { - vec![Slice(len)] - } + if len != 0 && cx.is_uninhabited(sub_ty) { vec![] } else { vec![FixedLenSlice(len)] } } // Treat arrays of a constant but unknown length like slices. 
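For intuition, the constructor sets produced here for a few common types look roughly as follows (a sketch using the constructor names from this file; the exact contents depend on the inhabitedness checks above):

    bool       -> [ConstantValue(true), ConstantValue(false)]
    [T; 3]     -> [FixedLenSlice(3)]                 (empty if T is uninhabited)
    &[T]       -> [VarLenSlice(0, 0)]                (one meta-constructor covering every length, for inhabited T)
    Option<T>  -> [Variant(Some), Variant(None)]     (uninhabited variants dropped under `exhaustive_patterns`)
    u8         -> [IntRange(0..=255)]
    char       -> [IntRange('\u{0000}'..='\u{D7FF}'), IntRange('\u{E000}'..='\u{10FFFF}')]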
- ty::Array(ref sub_ty, _) | - ty::Slice(ref sub_ty) => { + ty::Array(ref sub_ty, _) | ty::Slice(ref sub_ty) => { if cx.is_uninhabited(sub_ty) { - vec![Slice(0)] + vec![FixedLenSlice(0)] } else { - (0..pcx.max_slice_length+1).map(|length| Slice(length)).collect() + vec![VarLenSlice(0, 0)] } } - ty::Adt(def, substs) if def.is_enum() => { - def.variants.iter() - .filter(|v| { - !cx.tcx.features().exhaustive_patterns || - !v.uninhabited_from(cx.tcx, substs, def.adt_kind()).contains(cx.tcx, cx.module) - }) - .map(|v| Variant(v.def_id)) - .collect() - } + ty::Adt(def, substs) if def.is_enum() => def + .variants + .iter() + .filter(|v| { + !cx.tcx.features().exhaustive_patterns + || !v + .uninhabited_from(cx.tcx, substs, def.adt_kind()) + .contains(cx.tcx, cx.module) + }) + .map(|v| Variant(v.def_id)) + .collect(), ty::Char => { + let to_const = |x| x; vec![ // The valid Unicode Scalar Value ranges. - ConstantRange('\u{0000}' as u128, - '\u{D7FF}' as u128, - cx.tcx.types.char, - RangeEnd::Included + IntRange( + IntRange::from_range( + cx.tcx, + cx.tcx.types.char, + to_const('\u{0000}' as u128), + to_const('\u{D7FF}' as u128), + RangeEnd::Included, + ) + .unwrap(), ), - ConstantRange('\u{E000}' as u128, - '\u{10FFFF}' as u128, - cx.tcx.types.char, - RangeEnd::Included + IntRange( + IntRange::from_range( + cx.tcx, + cx.tcx.types.char, + to_const('\u{E000}' as u128), + to_const('\u{10FFFF}' as u128), + RangeEnd::Included, + ) + .unwrap(), ), ] } ty::Int(ity) => { + let to_const = |x| x; let bits = Integer::from_attr(&cx.tcx, SignedInt(ity)).size().bits() as u128; let min = 1u128 << (bits - 1); let max = min - 1; - vec![ConstantRange(min, max, pcx.ty, RangeEnd::Included)] + vec![IntRange( + IntRange::from_range(cx.tcx, ty, to_const(min), to_const(max), RangeEnd::Included) + .unwrap(), + )] } ty::Uint(uty) => { + let to_const = |x| x; let size = Integer::from_attr(&cx.tcx, UnsignedInt(uty)).size(); let max = truncate(u128::max_value(), size); - vec![ConstantRange(0, max, pcx.ty, RangeEnd::Included)] + vec![IntRange( + IntRange::from_range(cx.tcx, ty, to_const(0), to_const(max), RangeEnd::Included) + .unwrap(), + )] } _ => { - if cx.is_uninhabited(pcx.ty) { + if cx.is_uninhabited(ty) { vec![] } else { vec![Single] } } }; - ctors -} - -fn max_slice_length<'p, 'a, 'tcx, I>(cx: &mut MatchCheckCtxt<'a, 'tcx>, patterns: I) -> u64 -where - I: Iterator>, - 'tcx: 'p, -{ - // The exhaustiveness-checking paper does not include any details on - // checking variable-length slice patterns. However, they are matched - // by an infinite collection of fixed-length array patterns. - // - // Checking the infinite set directly would take an infinite amount - // of time. However, it turns out that for each finite set of - // patterns `P`, all sufficiently large array lengths are equivalent: - // - // Each slice `s` with a "sufficiently-large" length `l ≥ L` that applies - // to exactly the subset `Pₜ` of `P` can be transformed to a slice - // `sₘ` for each sufficiently-large length `m` that applies to exactly - // the same subset of `P`. - // - // Because of that, each witness for reachability-checking from one - // of the sufficiently-large lengths can be transformed to an - // equally-valid witness from any other length, so we only have - // to check slice lengths from the "minimal sufficiently-large length" - // and below. 
- // - // Note that the fact that there is a *single* `sₘ` for each `m` - // not depending on the specific pattern in `P` is important: if - // you look at the pair of patterns - // `[true, ..]` - // `[.., false]` - // Then any slice of length ≥1 that matches one of these two - // patterns can be trivially turned to a slice of any - // other length ≥1 that matches them and vice-versa - for - // but the slice from length 2 `[false, true]` that matches neither - // of these patterns can't be turned to a slice from length 1 that - // matches neither of these patterns, so we have to consider - // slices from length 2 there. - // - // Now, to see that that length exists and find it, observe that slice - // patterns are either "fixed-length" patterns (`[_, _, _]`) or - // "variable-length" patterns (`[_, .., _]`). - // - // For fixed-length patterns, all slices with lengths *longer* than - // the pattern's length have the same outcome (of not matching), so - // as long as `L` is greater than the pattern's length we can pick - // any `sₘ` from that length and get the same result. - // - // For variable-length patterns, the situation is more complicated, - // because as seen above the precise value of `sₘ` matters. - // - // However, for each variable-length pattern `p` with a prefix of length - // `plₚ` and suffix of length `slₚ`, only the first `plₚ` and the last - // `slₚ` elements are examined. - // - // Therefore, as long as `L` is positive (to avoid concerns about empty - // types), all elements after the maximum prefix length and before - // the maximum suffix length are not examined by any variable-length - // pattern, and therefore can be added/removed without affecting - // them - creating equivalent patterns from any sufficiently-large - // length. - // - // Of course, if fixed-length patterns exist, we must be sure - // that our length is large enough to miss them all, so - // we can pick `L = max(FIXED_LEN+1 ∪ {max(PREFIX_LEN) + max(SUFFIX_LEN)})` - // - // for example, with the above pair of patterns, all elements - // but the first and last can be added/removed, so any - // witness of length ≥2 (say, `[false, false, true]`) can be - // turned to a witness from any other length ≥2. - - let mut max_prefix_len = 0; - let mut max_suffix_len = 0; - let mut max_fixed_len = 0; - - for row in patterns { - match *row.kind { - PatKind::Constant { value } => { - // extract the length of an array/slice from a constant - match (value.val, &value.ty.kind) { - (_, ty::Array(_, n)) => max_fixed_len = cmp::max( - max_fixed_len, - n.eval_usize(cx.tcx, cx.param_env), - ), - (ConstValue::Slice{ start, end, .. }, ty::Slice(_)) => max_fixed_len = cmp::max( - max_fixed_len, - (end - start) as u64, - ), - _ => {}, - } - } - PatKind::Slice { ref prefix, slice: None, ref suffix } => { - let fixed_len = prefix.len() as u64 + suffix.len() as u64; - max_fixed_len = cmp::max(max_fixed_len, fixed_len); - } - PatKind::Slice { ref prefix, slice: Some(_), ref suffix } => { - max_prefix_len = cmp::max(max_prefix_len, prefix.len() as u64); - max_suffix_len = cmp::max(max_suffix_len, suffix.len() as u64); - } - _ => {} - } - } - - cmp::max(max_fixed_len + 1, max_prefix_len + max_suffix_len) + ctors } /// An inclusive interval, used for precise integer exhaustiveness checking. @@ -827,13 +1507,17 @@ where /// /// `IntRange` is never used to encode an empty range or a "range" that wraps /// around the (offset) space: i.e., `range.lo <= range.hi`. 
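One user-visible consequence of the `precise_pointer_size_matching` gating used below: pointer-sized integers are never treated as exhaustively matched on stable, so a range arm covering every `usize` value still does not make the match exhaustive. A minimal illustration (written for this note, not taken from the test suite):

    fn classify(n: usize) -> &'static str {
        match n {
            0..=usize::MAX => "covered",
            // Still required without `#![feature(precise_pointer_size_matching)]`;
            // with the feature enabled this arm would instead be linted as unreachable.
            _ => "never taken",
        }
    }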
-#[derive(Clone)] +#[derive(Debug, Clone, PartialEq)] struct IntRange<'tcx> { pub range: RangeInclusive, pub ty: Ty<'tcx>, } impl<'tcx> IntRange<'tcx> { + fn new(ty: Ty<'tcx>, range: RangeInclusive) -> Self { + IntRange { ty, range } + } + #[inline] fn is_integral(ty: Ty<'_>) -> bool { match ty.kind { @@ -842,6 +1526,13 @@ impl<'tcx> IntRange<'tcx> { } } + fn should_treat_range_exhaustively(tcx: TyCtxt<'tcx>, ty: Ty<'tcx>) -> bool { + // Don't treat `usize`/`isize` exhaustively unless the `precise_pointer_size_matching` + // feature is enabled. + IntRange::is_integral(ty) + && (!ty.is_ptr_sized_integral() || tcx.features().precise_pointer_size_matching) + } + #[inline] fn integral_size_and_signed_bias(tcx: TyCtxt<'tcx>, ty: Ty<'_>) -> Option<(Size, u128)> { match ty.kind { @@ -874,7 +1565,7 @@ impl<'tcx> IntRange<'tcx> { // This is a more general form of the previous branch. val } else { - return None + return None; }; let val = val ^ bias; Some(IntRange { range: val..=val, ty }) @@ -883,13 +1574,27 @@ impl<'tcx> IntRange<'tcx> { } } + #[inline] + fn from_const_range( + tcx: TyCtxt<'tcx>, + param_env: ty::ParamEnv<'tcx>, + lo: &Const<'tcx>, + hi: &Const<'tcx>, + end: &RangeEnd, + ) -> Option> { + let ty = lo.ty; + let lo = lo.eval_bits(tcx, param_env, lo.ty); + let hi = hi.eval_bits(tcx, param_env, hi.ty); + Self::from_range(tcx, ty, lo, hi, *end) + } + #[inline] fn from_range( tcx: TyCtxt<'tcx>, + ty: Ty<'tcx>, lo: u128, hi: u128, - ty: Ty<'tcx>, - end: &RangeEnd, + end: RangeEnd, ) -> Option> { if Self::is_integral(ty) { // Perform a shift if the underlying types are signed, @@ -897,10 +1602,10 @@ impl<'tcx> IntRange<'tcx> { let bias = IntRange::signed_bias(tcx, ty); let (lo, hi) = (lo ^ bias, hi ^ bias); // Make sure the interval is well-formed. - if lo > hi || lo == hi && *end == RangeEnd::Excluded { + if lo > hi || lo == hi && end == RangeEnd::Excluded { None } else { - let offset = (*end == RangeEnd::Excluded) as u128; + let offset = (end == RangeEnd::Excluded) as u128; Some(IntRange { range: lo..=(hi - offset), ty }) } } else { @@ -908,47 +1613,13 @@ impl<'tcx> IntRange<'tcx> { } } - fn from_ctor( - tcx: TyCtxt<'tcx>, - param_env: ty::ParamEnv<'tcx>, - ctor: &Constructor<'tcx>, - ) -> Option> { - // Floating-point ranges are permitted and we don't want - // to consider them when constructing integer ranges. + fn from_ctor(ctor: &Constructor<'tcx>) -> Option> { match ctor { - ConstantRange(lo, hi, ty, end) => Self::from_range(tcx, *lo, *hi, ty, end), - ConstantValue(val) => Self::from_const(tcx, param_env, val), + IntRange(range) => Some(range.clone()), _ => None, } } - fn from_pat( - tcx: TyCtxt<'tcx>, - param_env: ty::ParamEnv<'tcx>, - mut pat: &Pat<'tcx>, - ) -> Option> { - loop { - match pat.kind { - box PatKind::Constant { value } => { - return Self::from_const(tcx, param_env, value); - } - box PatKind::Range(PatRange { lo, hi, end }) => { - return Self::from_range( - tcx, - lo.eval_bits(tcx, param_env, lo.ty), - hi.eval_bits(tcx, param_env, hi.ty), - &lo.ty, - &end, - ); - } - box PatKind::AscribeUserType { ref subpattern, .. } => { - pat = subpattern; - }, - _ => return None, - } - } - } - // The return value of `signed_bias` should be XORed with an endpoint to encode/decode it. 
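To make that encode/decode step concrete (a standalone sketch, not compiler code): for `i8` the bias is `1 << 7`, and XOR-ing it into each endpoint maps the signed ordering onto the unsigned ordering, so the pattern range `-5..=5` is stored as the offset range `123..=133`.

    fn encode_i8(x: i8) -> u8 {
        (x as u8) ^ 0x80 // XOR with the bias `1 << 7`
    }

    fn main() {
        assert_eq!(encode_i8(i8::MIN), 0x00); // -128 becomes the smallest encoded value
        assert_eq!(encode_i8(-5), 123);
        assert_eq!(encode_i8(5), 133);
        assert_eq!(encode_i8(i8::MAX), 0xff); // 127 becomes the largest
        // Comparisons on encoded values agree with the signed comparisons, which is what
        // lets `IntRange` use a single `RangeInclusive<u128>` representation for all types.
    }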
fn signed_bias(tcx: TyCtxt<'tcx>, ty: Ty<'tcx>) -> u128 { match ty.kind { @@ -956,177 +1627,150 @@ impl<'tcx> IntRange<'tcx> { let bits = Integer::from_attr(&tcx, SignedInt(ity)).size().bits() as u128; 1u128 << (bits - 1) } - _ => 0 + _ => 0, } } - /// Converts a `RangeInclusive` to a `ConstantValue` or inclusive `ConstantRange`. - fn range_to_ctor( - tcx: TyCtxt<'tcx>, - ty: Ty<'tcx>, - r: RangeInclusive, - ) -> Constructor<'tcx> { - let bias = IntRange::signed_bias(tcx, ty); - let (lo, hi) = r.into_inner(); + /// Converts an `IntRange` to a `PatKind::Constant` or inclusive `PatKind::Range`. + fn to_patkind(&self, tcx: TyCtxt<'tcx>) -> PatKind<'tcx> { + let bias = IntRange::signed_bias(tcx, self.ty); + let (lo, hi) = self.range.clone().into_inner(); if lo == hi { - let ty = ty::ParamEnv::empty().and(ty); - ConstantValue(ty::Const::from_bits(tcx, lo ^ bias, ty)) + let ty = ty::ParamEnv::empty().and(self.ty); + PatKind::Constant { value: ty::Const::from_bits(tcx, lo ^ bias, ty) } } else { - ConstantRange(lo ^ bias, hi ^ bias, ty, RangeEnd::Included) + let param_env = ty::ParamEnv::empty().and(self.ty); + let to_const = |x| ty::Const::from_bits(tcx, x, param_env); + PatKind::Range(PatRange { + lo: to_const(lo ^ bias), + hi: to_const(hi ^ bias), + end: RangeEnd::Included, + }) } } - /// Returns a collection of ranges that spans the values covered by `ranges`, subtracted - /// by the values covered by `self`: i.e., `ranges \ self` (in set notation). - fn subtract_from( - self, - tcx: TyCtxt<'tcx>, - param_env: ty::ParamEnv<'tcx>, - ranges: Vec>, - ) -> Vec> { - let ranges = ranges.into_iter().filter_map(|r| { - IntRange::from_ctor(tcx, param_env, &r).map(|i| i.range) - }); - let mut remaining_ranges = vec![]; + /// Returns a collection of ranges that spans the values covered by `ctor`, subtracted + /// by the values covered by `self`: i.e., `ctor \ self` (in set notation). + fn subtract_from(&self, other: Self) -> SmallVec<[Self; 2]> { + let range = other.range; + let ty = self.ty; - let (lo, hi) = self.range.into_inner(); - for subrange in ranges { - let (subrange_lo, subrange_hi) = subrange.into_inner(); - if lo > subrange_hi || subrange_lo > hi { - // The pattern doesn't intersect with the subrange at all, - // so the subrange remains untouched. - remaining_ranges.push(Self::range_to_ctor(tcx, ty, subrange_lo..=subrange_hi)); - } else { - if lo > subrange_lo { - // The pattern intersects an upper section of the - // subrange, so a lower section will remain. - remaining_ranges.push(Self::range_to_ctor(tcx, ty, subrange_lo..=(lo - 1))); - } - if hi < subrange_hi { - // The pattern intersects a lower section of the - // subrange, so an upper section will remain. - remaining_ranges.push(Self::range_to_ctor(tcx, ty, (hi + 1)..=subrange_hi)); - } + let (lo, hi) = (*self.range.start(), *self.range.end()); + let (range_lo, range_hi) = range.into_inner(); + let mut remaining_ranges = smallvec![]; + if lo > range_hi || range_lo > hi { + // The pattern doesn't intersect with the range at all, + // so the range remains untouched. + remaining_ranges.push(Self::new(ty, range_lo..=range_hi)); + } else { + if lo > range_lo { + // The pattern intersects an upper section of the + // range, so a lower section will remain. + remaining_ranges.push(Self::new(ty, range_lo..=(lo - 1))); + } + if hi < range_hi { + // The pattern intersects a lower section of the + // range, so an upper section will remain. 
+ remaining_ranges.push(Self::new(ty, (hi + 1)..=range_hi)); } } remaining_ranges } - fn intersection(&self, other: &Self) -> Option { + fn intersection(&self, tcx: TyCtxt<'tcx>, other: &Self) -> Option { let ty = self.ty; let (lo, hi) = (*self.range.start(), *self.range.end()); let (other_lo, other_hi) = (*other.range.start(), *other.range.end()); - if lo <= other_hi && other_lo <= hi { - Some(IntRange { range: max(lo, other_lo)..=min(hi, other_hi), ty }) + if Self::should_treat_range_exhaustively(tcx, ty) { + if lo <= other_hi && other_lo <= hi { + Some(IntRange { range: max(lo, other_lo)..=min(hi, other_hi), ty }) + } else { + None + } } else { - None + // If the range sould not be treated exhaustively, fallback to checking for inclusion. + if other_lo <= lo && hi <= other_hi { Some(self.clone()) } else { None } } } } -// A request for missing constructor data in terms of either: -// - whether or not there any missing constructors; or -// - the actual set of missing constructors. -#[derive(PartialEq)] -enum MissingCtorsInfo { - Emptiness, - Ctors, +// A struct to compute a set of constructors equivalent to `all_ctors \ used_ctors`. +#[derive(Clone)] +struct MissingConstructors<'tcx> { + param_env: ty::ParamEnv<'tcx>, + tcx: TyCtxt<'tcx>, + all_ctors: Vec>, + used_ctors: Vec>, } -// Used by `compute_missing_ctors`. -#[derive(Debug, PartialEq)] -enum MissingCtors<'tcx> { - Empty, - NonEmpty, +impl<'tcx> MissingConstructors<'tcx> { + fn new( + tcx: TyCtxt<'tcx>, + param_env: ty::ParamEnv<'tcx>, + all_ctors: Vec>, + used_ctors: Vec>, + ) -> Self { + MissingConstructors { tcx, param_env, all_ctors, used_ctors } + } - // Note that the Vec can be empty. - Ctors(Vec>), -} + fn into_inner(self) -> (Vec>, Vec>) { + (self.all_ctors, self.used_ctors) + } -// When `info` is `MissingCtorsInfo::Ctors`, compute a set of constructors -// equivalent to `all_ctors \ used_ctors`. When `info` is -// `MissingCtorsInfo::Emptiness`, just determines if that set is empty or not. -// (The split logic gives a performance win, because we always need to know if -// the set is empty, but we rarely need the full set, and it can be expensive -// to compute the full set.) -fn compute_missing_ctors<'tcx>( - info: MissingCtorsInfo, - tcx: TyCtxt<'tcx>, - param_env: ty::ParamEnv<'tcx>, - all_ctors: &Vec>, - used_ctors: &Vec>, -) -> MissingCtors<'tcx> { - let mut missing_ctors = vec![]; - - for req_ctor in all_ctors { - let mut refined_ctors = vec![req_ctor.clone()]; - for used_ctor in used_ctors { - if used_ctor == req_ctor { - // If a constructor appears in a `match` arm, we can - // eliminate it straight away. - refined_ctors = vec![] - } else if let Some(interval) = IntRange::from_ctor(tcx, param_env, used_ctor) { - // Refine the required constructors for the type by subtracting - // the range defined by the current constructor pattern. - refined_ctors = interval.subtract_from(tcx, param_env, refined_ctors); - } + fn is_empty(&self) -> bool { + self.iter().next().is_none() + } - // If the constructor patterns that have been considered so far - // already cover the entire range of values, then we the - // constructor is not missing, and we can move on to the next one. - if refined_ctors.is_empty() { - break; - } - } - // If a constructor has not been matched, then it is missing. - // We add `refined_ctors` instead of `req_ctor`, because then we can - // provide more detailed error information about precisely which - // ranges have been omitted. 
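The effect of this subtraction is easiest to see on a small self-contained model (all names below are invented for illustration; the real code works on `IntRange`s and arbitrary constructors): start from the full range of the type and subtract each range used in the match, and what remains are exactly the missing ranges that end up reported in witnesses.

    type Range = (u128, u128); // inclusive endpoints

    fn subtract(from: Range, sub: Range) -> Vec<Range> {
        let (lo, hi) = from;
        let (s_lo, s_hi) = sub;
        if s_lo > hi || lo > s_hi {
            return vec![from]; // no overlap: nothing is removed
        }
        let mut out = Vec::new();
        if lo < s_lo {
            out.push((lo, s_lo - 1)); // the part below the subtracted range survives
        }
        if s_hi < hi {
            out.push((s_hi + 1, hi)); // the part above survives
        }
        out
    }

    fn main() {
        // A `u8` scrutinee matched with arms `0..=10` and `50..=100`:
        let mut missing = vec![(0u128, 255u128)];
        for used in [(0u128, 10u128), (50, 100)] {
            missing = missing.into_iter().flat_map(|r| subtract(r, used)).collect();
        }
        assert_eq!(missing, vec![(11, 49), (101, 255)]);
    }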
- if info == MissingCtorsInfo::Emptiness { - if !refined_ctors.is_empty() { - // The set is non-empty; return early. - return MissingCtors::NonEmpty; - } - } else { - missing_ctors.extend(refined_ctors); - } + /// Iterate over all_ctors \ used_ctors + fn iter<'a>(&'a self) -> impl Iterator> + Captures<'a> { + self.all_ctors.iter().flat_map(move |req_ctor| { + req_ctor.clone().subtract_meta_constructor(self.tcx, self.param_env, &self.used_ctors) + }) + } +} + +impl<'tcx> fmt::Debug for MissingConstructors<'tcx> { + fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { + let ctors: Vec<_> = self.iter().collect(); + f.debug_tuple("MissingConstructors").field(&ctors).finish() } +} - if info == MissingCtorsInfo::Emptiness { - // If we reached here, the set is empty. - MissingCtors::Empty - } else { - MissingCtors::Ctors(missing_ctors) +/// This is needed for the `PartialEq` impl of `Constructor`. +/// Comparing a `Constructor::MissingConstructor` with something else +/// should however never happen, so this implementaiton panics. +impl<'tcx> PartialEq for MissingConstructors<'tcx> { + fn eq(&self, _other: &Self) -> bool { + bug!("tried to compare MissingConstructors for equality") } } -/// Algorithm from http://moscova.inria.fr/~maranget/papers/warn/index.html. -/// The algorithm from the paper has been modified to correctly handle empty -/// types. The changes are: +/// Main entrypoint of the algorithm described at th top of the file. +/// Note that to correctly handle empty types: /// (0) We don't exit early if the pattern matrix has zero rows. We just /// continue to recurse over columns. /// (1) all_constructors will only return constructors that are statically /// possible. E.g., it will only return `Ok` for `Result`. /// -/// This finds whether a (row) vector `v` of patterns is 'useful' in relation -/// to a set of such vectors `m` - this is defined as there being a set of -/// inputs that will match `v` but not any of the sets in `m`. +/// This finds whether a pattern-stack `v` is 'useful' in relation to a set of such pattern-stacks +/// (aka 'matrix') `m` - this is defined as there being a set of inputs that will match `v` but not +/// any of the rows in `m`. /// /// All the patterns at each column of the `matrix ++ v` matrix must /// have the same type, except that wildcard (PatKind::Wild) patterns /// with type `TyErr` are also allowed, even if the "type of the column" /// is not `TyErr`. That is used to represent private fields, as using their -/// real type would assert that they are inhabited. +/// real type might leak that they are inhabited. /// /// This is used both for reachability checking (if a pattern isn't useful in /// relation to preceding patterns, it is not reachable) and exhaustiveness /// checking (if a wildcard pattern is useful in relation to a matrix, the /// matrix isn't exhaustive). pub fn is_useful<'p, 'a, 'tcx>( - cx: &mut MatchCheckCtxt<'a, 'tcx>, + cx: &MatchCheckCtxt<'a, 'tcx>, matrix: &Matrix<'p, 'tcx>, - v: &[&Pat<'tcx>], - witness: WitnessPreference, + v: &PatStack<'_, 'tcx>, + witness_preference: WitnessPreference, ) -> Usefulness<'tcx> { let &Matrix(ref rows) = matrix; debug!("is_useful({:#?}, {:#?})", matrix, v); @@ -1138,282 +1782,123 @@ pub fn is_useful<'p, 'a, 'tcx>( // the type of the tuple we're checking is inhabited or not. 
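To keep both consumers in mind, here is what the two answers mean in source terms (illustrative example only):

    fn describe(x: Option<bool>) -> &'static str {
        match x {
            Some(true) => "yes",
            Some(false) => "no",
            None => "unknown",
            // Adding `Some(_) => ..` here would be dead code: `U(rows above, Some(_))` is
            // false, so the reachability check flags it. Removing the `None` arm instead
            // would make `U(remaining rows, _)` true, so exhaustiveness checking rejects
            // the match.
        }
    }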
if v.is_empty() { return if rows.is_empty() { - match witness { - ConstructWitness => UsefulWithWitness(vec![Witness(vec![])]), - LeaveOutWitness => Useful, - } + Usefulness::new_useful(witness_preference) } else { NotUseful - } + }; }; assert!(rows.iter().all(|r| r.len() == v.len())); - let pcx = PatCtxt { - // TyErr is used to represent the type of wildcard patterns matching - // against inaccessible (private) fields of structs, so that we won't - // be able to observe whether the types of the struct's fields are - // inhabited. - // - // If the field is truly inaccessible, then all the patterns - // matching against it must be wildcard patterns, so its type - // does not matter. - // - // However, if we are matching against non-wildcard patterns, we - // need to know the real type of the field so we can specialize - // against it. This primarily occurs through constants - they - // can include contents for fields that are inaccessible at the - // location of the match. In that case, the field's type is - // inhabited - by the constant - so we can just use it. - // - // FIXME: this might lead to "unstable" behavior with macro hygiene - // introducing uninhabited patterns for inaccessible fields. We - // need to figure out how to model that. - ty: rows.iter().map(|r| r[0].ty).find(|ty| !ty.references_error()).unwrap_or(v[0].ty), - max_slice_length: max_slice_length(cx, rows.iter().map(|r| r[0]).chain(Some(v[0]))) - }; + // TyErr is used to represent the type of wildcard patterns matching + // against inaccessible (private) fields of structs, so that we won't + // be able to observe whether the types of the struct's fields are + // inhabited. + // + // If the field is truly inaccessible, then all the patterns + // matching against it must be wildcard patterns, so its type + // does not matter. + // + // However, if we are matching against non-wildcard patterns, we + // need to know the real type of the field so we can specialize + // against it. This primarily occurs through constants - they + // can include contents for fields that are inaccessible at the + // location of the match. In that case, the field's type is + // inhabited - by the constant - so we can just use it. + // + // FIXME: this might lead to "unstable" behavior with macro hygiene + // introducing uninhabited patterns for inaccessible fields. We + // need to figure out how to model that. 
+ let ty = matrix.heads().map(|p| p.ty).find(|ty| !ty.references_error()).unwrap_or(v.head().ty); - debug!("is_useful_expand_first_col: pcx={:#?}, expanding {:#?}", pcx, v[0]); + debug!("is_useful_expand_first_col: ty={:#?}, expanding {:#?}", ty, v.head()); - if let Some(constructors) = pat_constructors(cx, v[0], pcx) { - let is_declared_nonexhaustive = cx.is_non_exhaustive_variant(v[0]) && !cx.is_local(pcx.ty); - debug!("is_useful - expanding constructors: {:#?}, is_declared_nonexhaustive: {:?}", - constructors, is_declared_nonexhaustive); + let v_constructor = v.head_ctors(); - if is_declared_nonexhaustive { - Useful - } else { - split_grouped_constructors( - cx.tcx, cx.param_env, constructors, matrix, pcx.ty, - ).into_iter().map(|c| - is_useful_specialized(cx, matrix, v, c, pcx.ty, witness) - ).find(|result| result.is_useful()).unwrap_or(NotUseful) - } - } else { - debug!("is_useful - expanding wildcard"); - - let used_ctors: Vec> = rows.iter().flat_map(|row| { - pat_constructors(cx, row[0], pcx).unwrap_or(vec![]) - }).collect(); - debug!("used_ctors = {:#?}", used_ctors); - // `all_ctors` are all the constructors for the given type, which - // should all be represented (or caught with the wild pattern `_`). - let all_ctors = all_constructors(cx, pcx); - debug!("all_ctors = {:#?}", all_ctors); - - // `missing_ctors` is the set of constructors from the same type as the - // first column of `matrix` that are matched only by wildcard patterns - // from the first column. - // - // Therefore, if there is some pattern that is unmatched by `matrix`, - // it will still be unmatched if the first constructor is replaced by - // any of the constructors in `missing_ctors` - // - // However, if our scrutinee is *privately* an empty enum, we - // must treat it as though it had an "unknown" constructor (in - // that case, all other patterns obviously can't be variants) - // to avoid exposing its emptyness. See the `match_privately_empty` - // test for details. - // - // FIXME: currently the only way I know of something can - // be a privately-empty enum is when the exhaustive_patterns - // feature flag is not present, so this is only - // needed for that case. - - // Missing constructors are those that are not matched by any - // non-wildcard patterns in the current column. We always determine if - // the set is empty, but we only fully construct them on-demand, - // because they're rarely used and can be big. - let cheap_missing_ctors = compute_missing_ctors( - MissingCtorsInfo::Emptiness, cx.tcx, cx.param_env, &all_ctors, &used_ctors, - ); - - let is_privately_empty = all_ctors.is_empty() && !cx.is_uninhabited(pcx.ty); - let is_declared_nonexhaustive = cx.is_non_exhaustive_enum(pcx.ty) && !cx.is_local(pcx.ty); - debug!("cheap_missing_ctors={:#?} is_privately_empty={:#?} is_declared_nonexhaustive={:#?}", - cheap_missing_ctors, is_privately_empty, is_declared_nonexhaustive); - - // For privately empty and non-exhaustive enums, we work as if there were an "extra" - // `_` constructor for the type, so we can never match over all constructors. 
- let is_non_exhaustive = is_privately_empty || is_declared_nonexhaustive || - (pcx.ty.is_ptr_sized_integral() && !cx.tcx.features().precise_pointer_size_matching); - - if cheap_missing_ctors == MissingCtors::Empty && !is_non_exhaustive { - split_grouped_constructors(cx.tcx, cx.param_env, all_ctors, matrix, pcx.ty) - .into_iter().map(|c| is_useful_specialized(cx, matrix, v, c, pcx.ty, witness)) - .find(|result| result.is_useful()) - .unwrap_or(NotUseful) - } else { - let matrix = rows.iter().filter_map(|r| { - if r[0].is_wildcard() { - Some(SmallVec::from_slice(&r[1..])) - } else { - None - } - }).collect(); - match is_useful(cx, &matrix, &v[1..], witness) { - UsefulWithWitness(pats) => { - let cx = &*cx; - // In this case, there's at least one "free" - // constructor that is only matched against by - // wildcard patterns. - // - // There are 2 ways we can report a witness here. - // Commonly, we can report all the "free" - // constructors as witnesses, e.g., if we have: - // - // ``` - // enum Direction { N, S, E, W } - // let Direction::N = ...; - // ``` - // - // we can report 3 witnesses: `S`, `E`, and `W`. - // - // However, there are 2 cases where we don't want - // to do this and instead report a single `_` witness: - // - // 1) If the user is matching against a non-exhaustive - // enum, there is no point in enumerating all possible - // variants, because the user can't actually match - // against them himself, e.g., in an example like: - // ``` - // let err: io::ErrorKind = ...; - // match err { - // io::ErrorKind::NotFound => {}, - // } - // ``` - // we don't want to show every possible IO error, - // but instead have `_` as the witness (this is - // actually *required* if the user specified *all* - // IO errors, but is probably what we want in every - // case). - // - // 2) If the user didn't actually specify a constructor - // in this arm, e.g., in - // ``` - // let x: (Direction, Direction, bool) = ...; - // let (_, _, false) = x; - // ``` - // we don't want to show all 16 possible witnesses - // `(, , true)` - we are - // satisfied with `(_, _, true)`. In this case, - // `used_ctors` is empty. - let new_witnesses = if is_non_exhaustive || used_ctors.is_empty() { - // All constructors are unused. Add wild patterns - // rather than each individual constructor. - pats.into_iter().map(|mut witness| { - witness.0.push(Pat { - ty: pcx.ty, - span: DUMMY_SP, - kind: box PatKind::Wild, - }); - witness - }).collect() - } else { - let expensive_missing_ctors = compute_missing_ctors( - MissingCtorsInfo::Ctors, cx.tcx, cx.param_env, &all_ctors, &used_ctors, - ); - if let MissingCtors::Ctors(missing_ctors) = expensive_missing_ctors { - pats.into_iter().flat_map(|witness| { - missing_ctors.iter().map(move |ctor| { - // Extends the witness with a "wild" version of this - // constructor, that matches everything that can be built with - // it. For example, if `ctor` is a `Constructor::Variant` for - // `Option::Some`, this pushes the witness for `Some(_)`. 
- witness.clone().push_wild_constructor(cx, ctor, pcx.ty) - }) - }).collect() - } else { - bug!("cheap missing ctors") - } - }; - UsefulWithWitness(new_witnesses) - } - result => result - } - } + if cx.is_non_exhaustive_variant(v.head()) && !cx.is_local(ty) && !v_constructor.is_wildcard() { + debug!("is_useful - shortcut because declared non-exhaustive"); + // FIXME(#65157) + return Useful; } + + let matrix_head_ctors = matrix.head_ctors(); + debug!("matrix_head_ctors = {:#?}", matrix_head_ctors); + + v_constructor + .split_meta_constructor(cx, ty, &matrix_head_ctors) + .into_iter() + .map(|c| is_useful_specialized(cx, matrix, v, c, ty, witness_preference)) + .find(|result| result.is_useful()) + .unwrap_or(NotUseful) } -/// A shorthand for the `U(S(c, P), S(c, q))` operation from the paper. I.e., `is_useful` applied -/// to the specialised version of both the pattern matrix `P` and the new pattern `q`. +/// A shorthand for the `U(S(c, M), S(c, q))` operation. I.e., `is_useful` applied +/// to the specialised version of both the pattern matrix `M` and the new pattern `q`. fn is_useful_specialized<'p, 'a, 'tcx>( - cx: &mut MatchCheckCtxt<'a, 'tcx>, - &Matrix(ref m): &Matrix<'p, 'tcx>, - v: &[&Pat<'tcx>], + cx: &MatchCheckCtxt<'a, 'tcx>, + matrix: &Matrix<'p, 'tcx>, + v: &PatStack<'_, 'tcx>, ctor: Constructor<'tcx>, - lty: Ty<'tcx>, - witness: WitnessPreference, + ty: Ty<'tcx>, + witness_preference: WitnessPreference, ) -> Usefulness<'tcx> { - debug!("is_useful_specialized({:#?}, {:#?}, {:?})", v, ctor, lty); - let sub_pat_tys = constructor_sub_pattern_tys(cx, &ctor, lty); - let wild_patterns_owned: Vec<_> = sub_pat_tys.iter().map(|ty| { - Pat { - ty, - span: DUMMY_SP, - kind: box PatKind::Wild, - } - }).collect(); - let wild_patterns: Vec<_> = wild_patterns_owned.iter().collect(); - let matrix = Matrix( - m.iter() - .filter_map(|r| specialize(cx, &r, &ctor, &wild_patterns)) - .collect() - ); - match specialize(cx, v, &ctor, &wild_patterns) { - Some(v) => match is_useful(cx, &matrix, &v, witness) { - UsefulWithWitness(witnesses) => UsefulWithWitness( - witnesses.into_iter() - .map(|witness| witness.apply_constructor(cx, &ctor, lty)) - .collect() - ), - result => result - } - None => NotUseful - } + debug!("is_useful_specialized({:#?}, {:#?}, {:?})", v, ctor, ty); + + let ctor_wild_subpatterns_owned: Vec<_> = ctor.wildcard_subpatterns(cx, ty).collect(); + let ctor_wild_subpatterns: Vec<_> = ctor_wild_subpatterns_owned.iter().collect(); + let matrix = matrix.specialize(cx, &ctor, &ctor_wild_subpatterns); + let ret = v + .specialize(cx, &ctor, &ctor_wild_subpatterns) + .into_iter() + .map(|v| is_useful(cx, &matrix, &v, witness_preference)) + .map(|u| u.apply_constructor(cx, &ctor, ty)) + .find(|result| result.is_useful()) + .unwrap_or(NotUseful); + ret } -/// Determines the constructors that the given pattern can be specialized to. -/// -/// In most cases, there's only one constructor that a specific pattern -/// represents, such as a specific enum variant or a specific literal value. -/// Slice patterns, however, can match slices of different lengths. For instance, -/// `[a, b, ..tail]` can match a slice of length 2, 3, 4 and so on. -/// -/// Returns `None` in case of a catch-all, which can't be specialized. -fn pat_constructors<'tcx>(cx: &mut MatchCheckCtxt<'_, 'tcx>, - pat: &Pat<'tcx>, - pcx: PatCtxt<'tcx>) - -> Option>> -{ +/// Determines the constructors that are covered by the given pattern. +/// Except for or-patterns, this returns only one constructor (possibly a meta-constructor). 
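For concreteness, the mapping implemented below looks roughly like this for some typical patterns (the `Option` and slice scrutinee types are assumed just for the sake of the example):

    _  or  x              -> Wildcard
    (a, b)  or  &p        -> Single
    Some(_)               -> Variant(Option::Some)
    42u8                  -> IntRange(42..=42)
    'a'..='z'             -> IntRange('a'..='z')
    [x, y]                -> FixedLenSlice(2)
    [first, .., last]     -> VarLenSlice(1, 1)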
+fn pat_constructors<'tcx>( + tcx: TyCtxt<'tcx>, + param_env: ty::ParamEnv<'tcx>, + pat: &Pat<'tcx>, +) -> Constructor<'tcx> { match *pat.kind { - PatKind::AscribeUserType { ref subpattern, .. } => - pat_constructors(cx, subpattern, pcx), - PatKind::Binding { .. } | PatKind::Wild => None, - PatKind::Leaf { .. } | PatKind::Deref { .. } => Some(vec![Single]), + PatKind::AscribeUserType { ref subpattern, .. } => { + pat_constructors(tcx, param_env, subpattern) + } + PatKind::Binding { .. } | PatKind::Wild => Wildcard, + PatKind::Leaf { .. } | PatKind::Deref { .. } => Single, PatKind::Variant { adt_def, variant_index, .. } => { - Some(vec![Variant(adt_def.variants[variant_index].def_id)]) + Variant(adt_def.variants[variant_index].def_id) + } + PatKind::Constant { value } => { + // FIXME: consts are not handled properly; see #65413 + if let Some(range) = IntRange::from_const(tcx, param_env, value) { + IntRange(range) + } else { + ConstantValue(value) + } + } + PatKind::Range(PatRange { lo, hi, end }) => { + if let Some(range) = IntRange::from_const_range(tcx, param_env, &lo, &hi, &end) { + IntRange(range) + } else { + ConstantRange(lo, hi, end) + } } - PatKind::Constant { value } => Some(vec![ConstantValue(value)]), - PatKind::Range(PatRange { lo, hi, end }) => - Some(vec![ConstantRange( - lo.eval_bits(cx.tcx, cx.param_env, lo.ty), - hi.eval_bits(cx.tcx, cx.param_env, hi.ty), - lo.ty, - end, - )]), - PatKind::Array { .. } => match pcx.ty.kind { - ty::Array(_, length) => Some(vec![ - Slice(length.eval_usize(cx.tcx, cx.param_env)) - ]), - _ => span_bug!(pat.span, "bad ty {:?} for array pattern", pcx.ty) + PatKind::Array { .. } => match pat.ty.kind { + ty::Array(_, length) => FixedLenSlice(length.eval_usize(tcx, param_env)), + _ => span_bug!(pat.span, "bad ty {:?} for array pattern", pat.ty), }, PatKind::Slice { ref prefix, ref slice, ref suffix } => { - let pat_len = prefix.len() as u64 + suffix.len() as u64; + let prefix = prefix.len() as u64; + let suffix = suffix.len() as u64; if slice.is_some() { - Some((pat_len..pcx.max_slice_length+1).map(Slice).collect()) + VarLenSlice(prefix, suffix) } else { - Some(vec![Slice(pat_len)]) + FixedLenSlice(prefix + suffix) } } PatKind::Or { .. } => { @@ -1422,81 +1907,7 @@ fn pat_constructors<'tcx>(cx: &mut MatchCheckCtxt<'_, 'tcx>, } } -/// This computes the arity of a constructor. The arity of a constructor -/// is how many subpattern patterns of that constructor should be expanded to. -/// -/// For instance, a tuple pattern `(_, 42, Some([]))` has the arity of 3. -/// A struct pattern's arity is the number of fields it contains, etc. -fn constructor_arity(cx: &MatchCheckCtxt<'a, 'tcx>, ctor: &Constructor<'tcx>, ty: Ty<'tcx>) -> u64 { - debug!("constructor_arity({:#?}, {:?})", ctor, ty); - match ty.kind { - ty::Tuple(ref fs) => fs.len() as u64, - ty::Slice(..) | ty::Array(..) => match *ctor { - Slice(length) => length, - ConstantValue(_) => 0, - _ => bug!("bad slice pattern {:?} {:?}", ctor, ty) - } - ty::Ref(..) => 1, - ty::Adt(adt, _) => { - adt.variants[ctor.variant_index_for_adt(cx, adt)].fields.len() as u64 - } - _ => 0 - } -} - -/// This computes the types of the sub patterns that a constructor should be -/// expanded to. -/// -/// For instance, a tuple pattern (43u32, 'a') has sub pattern types [u32, char]. 
-fn constructor_sub_pattern_tys<'a, 'tcx>( - cx: &MatchCheckCtxt<'a, 'tcx>, - ctor: &Constructor<'tcx>, - ty: Ty<'tcx>, -) -> Vec> { - debug!("constructor_sub_pattern_tys({:#?}, {:?})", ctor, ty); - match ty.kind { - ty::Tuple(ref fs) => fs.into_iter().map(|t| t.expect_ty()).collect(), - ty::Slice(ty) | ty::Array(ty, _) => match *ctor { - Slice(length) => (0..length).map(|_| ty).collect(), - ConstantValue(_) => vec![], - _ => bug!("bad slice pattern {:?} {:?}", ctor, ty) - } - ty::Ref(_, rty, _) => vec![rty], - ty::Adt(adt, substs) => { - if adt.is_box() { - // Use T as the sub pattern type of Box. - vec![substs.type_at(0)] - } else { - adt.variants[ctor.variant_index_for_adt(cx, adt)].fields.iter().map(|field| { - let is_visible = adt.is_enum() - || field.vis.is_accessible_from(cx.module, cx.tcx); - if is_visible { - let ty = field.ty(cx.tcx, substs); - match ty.kind { - // If the field type returned is an array of an unknown - // size return an TyErr. - ty::Array(_, len) - if len.try_eval_usize(cx.tcx, cx.param_env).is_none() => - cx.tcx.types.err, - _ => ty, - } - } else { - // Treat all non-visible fields as TyErr. They - // can't appear in any other pattern from - // this match (because they are private), - // so their type does not matter - but - // we don't want to know they are - // uninhabited. - cx.tcx.types.err - } - }).collect() - } - } - _ => vec![], - } -} - -// checks whether a constant is equal to a user-written slice pattern. Only supports byte slices, +// Checks whether a constant is equal to a user-written slice pattern. Only supports byte slices, // meaning all other types will compare unequal and thus equal patterns often do not cause the // second pattern to lint about unreachable match arms. fn slice_pat_covered_by_const<'tcx>( @@ -1514,17 +1925,20 @@ fn slice_pat_covered_by_const<'tcx>( let n = n.eval_usize(tcx, param_env); let ptr = Pointer::new(AllocId(0), offset); alloc.get_bytes(&tcx, ptr, Size::from_bytes(n)).unwrap() - }, + } (ConstValue::Slice { data, start, end }, ty::Slice(t)) => { assert_eq!(*t, tcx.types.u8); let ptr = Pointer::new(AllocId(0), Size::from_bytes(start as u64)); data.get_bytes(&tcx, ptr, Size::from_bytes((end - start) as u64)).unwrap() - }, + } // FIXME(oli-obk): create a way to extract fat pointers from ByRef (_, ty::Slice(_)) => return Ok(false), _ => bug!( "slice_pat_covered_by_const: {:#?}, {:#?}, {:#?}, {:#?}", - const_val, prefix, slice, suffix, + const_val, + prefix, + slice, + suffix, ), }; @@ -1533,9 +1947,10 @@ fn slice_pat_covered_by_const<'tcx>( return Ok(false); } - for (ch, pat) in - data[..prefix.len()].iter().zip(prefix).chain( - data[data.len()-suffix.len()..].iter().zip(suffix)) + for (ch, pat) in data[..prefix.len()] + .iter() + .zip(prefix) + .chain(data[data.len() - suffix.len()..].iter().zip(suffix)) { match pat.kind { box PatKind::Constant { value } => { @@ -1552,248 +1967,128 @@ fn slice_pat_covered_by_const<'tcx>( Ok(true) } -// Whether to evaluate a constructor using exhaustive integer matching. This is true if the -// constructor is a range or constant with an integer type. 
-fn should_treat_range_exhaustively(tcx: TyCtxt<'tcx>, ctor: &Constructor<'tcx>) -> bool { - let ty = match ctor { - ConstantValue(value) => value.ty, - ConstantRange(_, _, ty, _) => ty, - _ => return false, - }; - if let ty::Char | ty::Int(_) | ty::Uint(_) = ty.kind { - !ty.is_ptr_sized_integral() || tcx.features().precise_pointer_size_matching - } else { - false - } -} - -/// For exhaustive integer matching, some constructors are grouped within other constructors -/// (namely integer typed values are grouped within ranges). However, when specialising these -/// constructors, we want to be specialising for the underlying constructors (the integers), not -/// the groups (the ranges). Thus we need to split the groups up. Splitting them up naïvely would -/// mean creating a separate constructor for every single value in the range, which is clearly -/// impractical. However, observe that for some ranges of integers, the specialisation will be -/// identical across all values in that range (i.e., there are equivalence classes of ranges of -/// constructors based on their `is_useful_specialized` outcome). These classes are grouped by -/// the patterns that apply to them (in the matrix `P`). We can split the range whenever the -/// patterns that apply to that range (specifically: the patterns that *intersect* with that range) -/// change. -/// Our solution, therefore, is to split the range constructor into subranges at every single point -/// the group of intersecting patterns changes (using the method described below). -/// And voilà! We're testing precisely those ranges that we need to, without any exhaustive matching -/// on actual integers. The nice thing about this is that the number of subranges is linear in the -/// number of rows in the matrix (i.e., the number of cases in the `match` statement), so we don't -/// need to be worried about matching over gargantuan ranges. -/// -/// Essentially, given the first column of a matrix representing ranges, looking like the following: -/// -/// |------| |----------| |-------| || -/// |-------| |-------| |----| || -/// |---------| -/// -/// We split the ranges up into equivalence classes so the ranges are no longer overlapping: -/// -/// |--|--|||-||||--||---|||-------| |-|||| || -/// -/// The logic for determining how to split the ranges is fairly straightforward: we calculate -/// boundaries for each interval range, sort them, then create constructors for each new interval -/// between every pair of boundary points. (This essentially sums up to performing the intuitive -/// merging operation depicted above.) -fn split_grouped_constructors<'p, 'tcx>( - tcx: TyCtxt<'tcx>, - param_env: ty::ParamEnv<'tcx>, - ctors: Vec>, - &Matrix(ref m): &Matrix<'p, 'tcx>, - ty: Ty<'tcx>, -) -> Vec> { - let mut split_ctors = Vec::with_capacity(ctors.len()); - - for ctor in ctors.into_iter() { - match ctor { - // For now, only ranges may denote groups of "subconstructors", so we only need to - // special-case constant ranges. - ConstantRange(..) if should_treat_range_exhaustively(tcx, &ctor) => { - // We only care about finding all the subranges within the range of the constructor - // range. Anything else is irrelevant, because it is guaranteed to result in - // `NotUseful`, which is the default case anyway, and can be ignored. - let ctor_range = IntRange::from_ctor(tcx, param_env, &ctor).unwrap(); - - /// Represents a border between 2 integers. 
Because the intervals spanning borders - /// must be able to cover every integer, we need to be able to represent - /// 2^128 + 1 such borders. - #[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)] - enum Border { - JustBefore(u128), - AfterMax, +/// Checks whether there exists any shared value in either `ctor` or `pat` by intersecting them. +// This has a single call site that can be hot +#[inline(always)] +fn constructor_intersects_pattern<'p, 'tcx>( + cx: &MatchCheckCtxt<'_, 'tcx>, + ctor: &Constructor<'tcx>, + pat: &'p Pat<'tcx>, +) -> Option> { + trace!("constructor_intersects_pattern {:#?}, {:#?}", ctor, pat); + match ctor { + Single => Some(PatStack::default()), + IntRange(ctor) => { + let pat = match *pat.kind { + PatKind::Constant { value } => IntRange::from_const(cx.tcx, cx.param_env, value)?, + PatKind::Range(PatRange { lo, hi, end }) => { + IntRange::from_const_range(cx.tcx, cx.param_env, lo, hi, &end)? } + _ => bug!("`constructor_intersects_pattern` called with {:?}", pat), + }; - // A function for extracting the borders of an integer interval. - fn range_borders(r: IntRange<'_>) -> impl Iterator { - let (lo, hi) = r.range.into_inner(); - let from = Border::JustBefore(lo); - let to = match hi.checked_add(1) { - Some(m) => Border::JustBefore(m), - None => Border::AfterMax, - }; - vec![from, to].into_iter() - } + ctor.intersection(cx.tcx, &pat)?; - // `borders` is the set of borders between equivalence classes: each equivalence - // class lies between 2 borders. - let row_borders = m.iter() - .flat_map(|row| IntRange::from_pat(tcx, param_env, row[0])) - .flat_map(|range| ctor_range.intersection(&range)) - .flat_map(|range| range_borders(range)); - let ctor_borders = range_borders(ctor_range.clone()); - let mut borders: Vec<_> = row_borders.chain(ctor_borders).collect(); - borders.sort_unstable(); + // Constructor splitting should ensure that all intersections we encounter are actually + // inclusions. + let (pat_lo, pat_hi) = pat.range.into_inner(); + let (ctor_lo, ctor_hi) = ctor.range.clone().into_inner(); + assert!(pat_lo <= ctor_lo && ctor_hi <= pat_hi); - // We're going to iterate through every pair of borders, making sure that each - // represents an interval of nonnegative length, and convert each such interval - // into a constructor. - for IntRange { range, .. } in borders.windows(2).filter_map(|window| { - match (window[0], window[1]) { - (Border::JustBefore(n), Border::JustBefore(m)) => { - if n < m { - Some(IntRange { range: n..=(m - 1), ty }) - } else { - None - } - } - (Border::JustBefore(n), Border::AfterMax) => { - Some(IntRange { range: n..=u128::MAX, ty }) - } - (Border::AfterMax, _) => None, - } - }) { - split_ctors.push(IntRange::range_to_ctor(tcx, ty, range)); - } - } - // Any other constructor can be used unchanged. 
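The splitting described above is easier to follow on a standalone model (invented names; this sketch ignores the `u128::MAX` edge case that `Border::AfterMax` handles): collect the borders of every row range that intersects the constructor range, sort them, and emit one subrange per adjacent pair.

    fn split(ctor: (u128, u128), rows: &[(u128, u128)]) -> Vec<(u128, u128)> {
        let mut borders = vec![ctor.0, ctor.1 + 1];
        for &(lo, hi) in rows {
            if hi < ctor.0 || lo > ctor.1 {
                continue; // this row does not intersect the constructor range
            }
            borders.push(lo.max(ctor.0));
            borders.push(hi.min(ctor.1) + 1);
        }
        borders.sort_unstable();
        borders.dedup();
        borders.windows(2).map(|w| (w[0], w[1] - 1)).collect()
    }

    fn main() {
        // Rows `0..=10` and `5..=20` against the constructor range `0..=255`:
        assert_eq!(
            split((0, 255), &[(0, 10), (5, 20)]),
            vec![(0, 4), (5, 10), (11, 20), (21, 255)]
        );
    }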
- _ => split_ctors.push(ctor), + Some(PatStack::default()) } - } - - split_ctors -} - -fn constructor_covered_by_range<'tcx>( - tcx: TyCtxt<'tcx>, - param_env: ty::ParamEnv<'tcx>, - ctor: &Constructor<'tcx>, - pat: &Pat<'tcx>, -) -> Result { - let (from, to, end, ty) = match pat.kind { - box PatKind::Constant { value } => (value, value, RangeEnd::Included, value.ty), - box PatKind::Range(PatRange { lo, hi, end }) => (lo, hi, end, lo.ty), - _ => bug!("`constructor_covered_by_range` called with {:?}", pat), - }; - trace!("constructor_covered_by_range {:#?}, {:#?}, {:#?}, {}", ctor, from, to, ty); - let cmp_from = |c_from| compare_const_vals(tcx, c_from, from, param_env, ty) - .map(|res| res != Ordering::Less); - let cmp_to = |c_to| compare_const_vals(tcx, c_to, to, param_env, ty); - macro_rules! some_or_ok { - ($e:expr) => { - match $e { - Some(to) => to, - None => return Ok(false), // not char or int - } - }; - } - match *ctor { - ConstantValue(value) => { - let to = some_or_ok!(cmp_to(value)); - let end = (to == Ordering::Less) || - (end == RangeEnd::Included && to == Ordering::Equal); - Ok(some_or_ok!(cmp_from(value)) && end) - }, - ConstantRange(from, to, ty, RangeEnd::Included) => { - let to = some_or_ok!(cmp_to(ty::Const::from_bits( - tcx, - to, - ty::ParamEnv::empty().and(ty), - ))); - let end = (to == Ordering::Less) || - (end == RangeEnd::Included && to == Ordering::Equal); - Ok(some_or_ok!(cmp_from(ty::Const::from_bits( - tcx, - from, - ty::ParamEnv::empty().and(ty), - ))) && end) - }, - ConstantRange(from, to, ty, RangeEnd::Excluded) => { - let to = some_or_ok!(cmp_to(ty::Const::from_bits( - tcx, - to, - ty::ParamEnv::empty().and(ty) - ))); - let end = (to == Ordering::Less) || - (end == RangeEnd::Excluded && to == Ordering::Equal); - Ok(some_or_ok!(cmp_from(ty::Const::from_bits( - tcx, - from, - ty::ParamEnv::empty().and(ty))) - ) && end) + ConstantValue(..) | ConstantRange(..) => { + // Fallback for non-ranges and ranges that involve floating-point numbers, which are + // not conveniently handled by `IntRange`. For these cases, the constructor may not be + // a range so intersection actually devolves into being covered by the pattern. 
+ let (pat_from, pat_to, pat_end) = match *pat.kind { + PatKind::Constant { value } => (value, value, RangeEnd::Included), + PatKind::Range(PatRange { lo, hi, end }) => (lo, hi, end), + _ => bug!("`constructor_intersects_pattern` called with {:?}", pat), + }; + let (ctor_from, ctor_to, ctor_end) = match *ctor { + ConstantValue(value) => (value, value, RangeEnd::Included), + ConstantRange(from, to, range_end) => (from, to, range_end), + _ => bug!(), + }; + let order_to = compare_const_vals(cx.tcx, ctor_to, pat_to, cx.param_env, pat_from.ty)?; + let order_from = + compare_const_vals(cx.tcx, ctor_from, pat_from, cx.param_env, pat_from.ty)?; + let included = (order_from != Ordering::Less) + && ((order_to == Ordering::Less) + || (pat_end == ctor_end && order_to == Ordering::Equal)); + if included { Some(PatStack::default()) } else { None } } - Single => Ok(true), - _ => bug!(), + _ => bug!("`constructor_intersects_pattern` called with {:?}", ctor), } } fn patterns_for_variant<'p, 'tcx>( + cx: &MatchCheckCtxt<'_, 'tcx>, subpatterns: &'p [FieldPat<'tcx>], - wild_patterns: &[&'p Pat<'tcx>]) - -> SmallVec<[&'p Pat<'tcx>; 2]> -{ - let mut result = SmallVec::from_slice(wild_patterns); + ctor_wild_subpatterns: &[&'p Pat<'tcx>], +) -> PatStack<'p, 'tcx> { + let mut result = SmallVec::from_slice(ctor_wild_subpatterns); for subpat in subpatterns { result[subpat.field.index()] = &subpat.pattern; } - debug!("patterns_for_variant({:#?}, {:#?}) = {:#?}", subpatterns, wild_patterns, result); - result + debug!( + "patterns_for_variant({:#?}, {:#?}) = {:#?}", + subpatterns, ctor_wild_subpatterns, result + ); + PatStack::from_vec(cx, result) } -/// This is the main specialization step. It expands the first pattern in the given row -/// into `arity` patterns based on the constructor. For most patterns, the step is trivial, -/// for instance tuple patterns are flattened and box patterns expand into their inner pattern. +/// This is the main specialization step. It expands the pattern into `arity` patterns based on the +/// constructor. For most patterns, the step is trivial, for instance tuple patterns are flattened +/// and box patterns expand into their inner pattern. Returns vec![] if the pattern does not have +/// the given constructor. See the top of the file for details. /// -/// OTOH, slice patterns with a subslice pattern (..tail) can be expanded into multiple -/// different patterns. /// Structure patterns with a partial wild pattern (Foo { a: 42, .. }) have their missing /// fields filled with wild patterns. -fn specialize<'p, 'a: 'p, 'tcx>( - cx: &mut MatchCheckCtxt<'a, 'tcx>, - r: &[&'p Pat<'tcx>], +fn specialize_one_pattern<'p, 'a: 'p, 'q: 'p, 'tcx>( + cx: &MatchCheckCtxt<'a, 'tcx>, + mut pat: &'q Pat<'tcx>, constructor: &Constructor<'tcx>, - wild_patterns: &[&'p Pat<'tcx>], -) -> Option; 2]>> { - let pat = &r[0]; + ctor_wild_subpatterns: &[&'p Pat<'tcx>], +) -> Option> { + while let PatKind::AscribeUserType { ref subpattern, .. } = *pat.kind { + pat = subpattern; + } - let head = match *pat.kind { - PatKind::AscribeUserType { ref subpattern, .. } => { - specialize(cx, ::std::slice::from_ref(&subpattern), constructor, wild_patterns) - } + if let Wildcard | MissingConstructors(_) = constructor { + // Both those cases capture a set of constructors that are not present in the head of + // current matrix. This means that we discard all non-wildcard constructors. + // See (MISSING-CTOR) at the top of the file for more details. + return match *pat.kind { + PatKind::Binding { .. 
} | PatKind::Wild => Some(PatStack::empty()), + _ => None, + }; + } + + match *pat.kind { + PatKind::AscribeUserType { .. } => unreachable!(), // Handled above PatKind::Binding { .. } | PatKind::Wild => { - Some(SmallVec::from_slice(wild_patterns)) + Some(PatStack::from_slice(cx, ctor_wild_subpatterns)) } PatKind::Variant { adt_def, variant_index, ref subpatterns, .. } => { let ref variant = adt_def.variants[variant_index]; - Some(Variant(variant.def_id)) - .filter(|variant_constructor| variant_constructor == constructor) - .map(|_| patterns_for_variant(subpatterns, wild_patterns)) + if Variant(variant.def_id) == *constructor { + Some(patterns_for_variant(cx, subpatterns, ctor_wild_subpatterns)) + } else { + None + } } PatKind::Leaf { ref subpatterns } => { - Some(patterns_for_variant(subpatterns, wild_patterns)) + Some(patterns_for_variant(cx, subpatterns, ctor_wild_subpatterns)) } - PatKind::Deref { ref subpattern } => { - Some(smallvec![subpattern]) - } + PatKind::Deref { ref subpattern } => Some(PatStack::from_pattern(cx, subpattern)), PatKind::Constant { value } if constructor.is_slice() => { // We extract an `Option` for the pointer because slices of zero @@ -1801,39 +2096,28 @@ fn specialize<'p, 'a: 'p, 'tcx>( // just integers. The only time they should be pointing to memory // is when they are subslices of nonzero slices. let (alloc, offset, n, ty) = match value.ty.kind { - ty::Array(t, n) => { - match value.val { - ConstValue::ByRef { offset, alloc, .. } => ( - alloc, - offset, - n.eval_usize(cx.tcx, cx.param_env), - t, - ), - _ => span_bug!( - pat.span, - "array pattern is {:?}", value, - ), + ty::Array(t, n) => match value.val { + ConstValue::ByRef { offset, alloc, .. } => { + (alloc, offset, n.eval_usize(cx.tcx, cx.param_env), t) } + _ => span_bug!(pat.span, "array pattern is {:?}", value,), }, ty::Slice(t) => { match value.val { - ConstValue::Slice { data, start, end } => ( - data, - Size::from_bytes(start as u64), - (end - start) as u64, - t, - ), + ConstValue::Slice { data, start, end } => { + (data, Size::from_bytes(start as u64), (end - start) as u64, t) + } ConstValue::ByRef { .. } => { - // FIXME(oli-obk): implement `deref` for `ConstValue` + // FIXME(oli-obk): implement `deref` for `ConstValue`. See #53708 return None; - }, + } _ => span_bug!( pat.span, "slice pattern constant must be scalar pair but is {:?}", value, ), } - }, + } _ => span_bug!( pat.span, "unexpected const-val {:?} with ctor {:?}", @@ -1841,102 +2125,91 @@ fn specialize<'p, 'a: 'p, 'tcx>( constructor, ), }; - if wild_patterns.len() as u64 == n { + if ctor_wild_subpatterns.len() as u64 == n { // convert a constant slice/array pattern to a list of patterns. 
- let layout = cx.tcx.layout_of(cx.param_env.and(ty)).ok()?; + let layout = if let Ok(layout) = cx.tcx.layout_of(cx.param_env.and(ty)) { + layout + } else { + return None; + }; let ptr = Pointer::new(AllocId(0), offset); - (0..n).map(|i| { - let ptr = ptr.offset(layout.size * i, &cx.tcx).ok()?; - let scalar = alloc.read_scalar( - &cx.tcx, ptr, layout.size, - ).ok()?; - let scalar = scalar.not_undef().ok()?; - let value = ty::Const::from_scalar(cx.tcx, scalar, ty); - let pattern = Pat { - ty, - span: pat.span, - kind: box PatKind::Constant { value }, - }; - Some(&*cx.pattern_arena.alloc(pattern)) - }).collect() + let stack: Option> = (0..n) + .map(|i| { + let ptr = ptr.offset(layout.size * i, &cx.tcx).ok()?; + let scalar = alloc.read_scalar(&cx.tcx, ptr, layout.size).ok()?; + let scalar = scalar.not_undef().ok()?; + let value = ty::Const::from_scalar(cx.tcx, scalar, ty); + let pattern = + Pat { ty, span: pat.span, kind: box PatKind::Constant { value } }; + Some(&*cx.pattern_arena.alloc(pattern)) + }) + .collect(); + match stack { + Some(v) => Some(PatStack::from_vec(cx, v)), + None => None, + } } else { None } } - PatKind::Constant { .. } | - PatKind::Range { .. } => { + PatKind::Constant { .. } | PatKind::Range { .. } => { // If the constructor is a: // - Single value: add a row if the pattern contains the constructor. // - Range: add a row if the constructor intersects the pattern. - if should_treat_range_exhaustively(cx.tcx, constructor) { - match (IntRange::from_ctor(cx.tcx, cx.param_env, constructor), - IntRange::from_pat(cx.tcx, cx.param_env, pat)) { - (Some(ctor), Some(pat)) => { - ctor.intersection(&pat).map(|_| { - let (pat_lo, pat_hi) = pat.range.into_inner(); - let (ctor_lo, ctor_hi) = ctor.range.into_inner(); - assert!(pat_lo <= ctor_lo && ctor_hi <= pat_hi); - smallvec![] - }) - } - _ => None, - } + if let Some(ps) = constructor_intersects_pattern(cx, constructor, pat) { + Some(ps) } else { - // Fallback for non-ranges and ranges that involve - // floating-point numbers, which are not conveniently handled - // by `IntRange`. For these cases, the constructor may not be a - // range so intersection actually devolves into being covered - // by the pattern. - match constructor_covered_by_range(cx.tcx, cx.param_env, constructor, pat) { - Ok(true) => Some(smallvec![]), - Ok(false) | Err(ErrorReported) => None, - } + None } } - PatKind::Array { ref prefix, ref slice, ref suffix } | - PatKind::Slice { ref prefix, ref slice, ref suffix } => { - match *constructor { - Slice(..) => { - let pat_len = prefix.len() + suffix.len(); - if let Some(slice_count) = wild_patterns.len().checked_sub(pat_len) { - if slice_count == 0 || slice.is_some() { - Some(prefix.iter().chain( - wild_patterns.iter().map(|p| *p) - .skip(prefix.len()) - .take(slice_count) - .chain(suffix.iter()) - ).collect()) - } else { - None - } + PatKind::Array { ref prefix, ref slice, ref suffix } + | PatKind::Slice { ref prefix, ref slice, ref suffix } => match *constructor { + FixedLenSlice(..) | VarLenSlice(..) 
=> { + let pat_len = prefix.len() + suffix.len(); + if let Some(slice_count) = ctor_wild_subpatterns.len().checked_sub(pat_len) { + if slice_count == 0 || slice.is_some() { + Some(PatStack::from_vec( + cx, + prefix + .iter() + .chain( + ctor_wild_subpatterns + .iter() + .map(|p| *p) + .skip(prefix.len()) + .take(slice_count) + .chain(suffix.iter()), + ) + .collect(), + )) } else { None } + } else { + None } - ConstantValue(cv) => { - match slice_pat_covered_by_const( - cx.tcx, pat.span, cv, prefix, slice, suffix, cx.param_env, - ) { - Ok(true) => Some(smallvec![]), - Ok(false) => None, - Err(ErrorReported) => None - } + } + ConstantValue(cv) => { + match slice_pat_covered_by_const( + cx.tcx, + pat.span, + cv, + prefix, + slice, + suffix, + cx.param_env, + ) { + Ok(true) => Some(PatStack::default()), + Ok(false) | Err(ErrorReported) => None, } - _ => span_bug!(pat.span, - "unexpected ctor {:?} for slice pat", constructor) } - } + _ => span_bug!(pat.span, "unexpected ctor {:?} for slice pat", constructor), + }, PatKind::Or { .. } => { bug!("support for or-patterns has not been fully implemented yet."); } - }; - debug!("specialize({:#?}, {:#?}) = {:#?}", r[0], wild_patterns, head); - - head.map(|mut head| { - head.extend_from_slice(&r[1 ..]); - head - }) + } } diff --git a/src/librustc_mir/hair/pattern/check_match.rs b/src/librustc_mir/hair/pattern/check_match.rs index 9bed4fb66ea9d..0f4c039b3fb48 100644 --- a/src/librustc_mir/hair/pattern/check_match.rs +++ b/src/librustc_mir/hair/pattern/check_match.rs @@ -1,24 +1,23 @@ -use super::_match::{MatchCheckCtxt, Matrix, expand_pattern, is_useful}; use super::_match::Usefulness::*; use super::_match::WitnessPreference::*; +use super::_match::{expand_pattern, is_useful, MatchCheckCtxt, Matrix, PatStack}; -use super::{PatCtxt, PatternError, PatKind}; +use super::{PatCtxt, PatKind, PatternError}; +use rustc::lint; use rustc::session::Session; -use rustc::ty::{self, Ty, TyCtxt}; use rustc::ty::subst::{InternalSubsts, SubstsRef}; -use rustc::lint; +use rustc::ty::{self, Ty, TyCtxt}; use rustc_errors::{Applicability, DiagnosticBuilder}; use rustc::hir::def::*; use rustc::hir::def_id::DefId; -use rustc::hir::intravisit::{self, Visitor, NestedVisitorMap}; +use rustc::hir::intravisit::{self, NestedVisitorMap, Visitor}; use rustc::hir::{self, Pat}; -use smallvec::smallvec; use std::slice; -use syntax_pos::{Span, DUMMY_SP, MultiSpan}; +use syntax_pos::{MultiSpan, Span, DUMMY_SP}; crate fn check_match(tcx: TyCtxt<'_>, def_id: DefId) { let body_id = match tcx.hir().as_local_hir_id(def_id) { @@ -99,13 +98,15 @@ impl PatCtxt<'_, '_> { ::rustc::mir::interpret::struct_error( self.tcx.at(pat_span), "could not evaluate float literal (see issue #31407)", - ).emit(); + ) + .emit(); } PatternError::NonConstPath(span) => { ::rustc::mir::interpret::struct_error( self.tcx.at(span), "runtime values cannot be referenced in patterns", - ).emit(); + ) + .emit(); } } } @@ -122,12 +123,7 @@ impl<'tcx> MatchVisitor<'_, 'tcx> { check_legality_of_bindings_in_at_patterns(self, pat); } - fn check_match( - &mut self, - scrut: &hir::Expr, - arms: &'tcx [hir::Arm], - source: hir::MatchSource - ) { + fn check_match(&mut self, scrut: &hir::Expr, arms: &'tcx [hir::Arm], source: hir::MatchSource) { for arm in arms { // First, check legality of move bindings. 
self.check_patterns(arm.guard.is_some(), &arm.pat); @@ -140,30 +136,38 @@ impl<'tcx> MatchVisitor<'_, 'tcx> { MatchCheckCtxt::create_and_enter(self.tcx, self.param_env, module, |ref mut cx| { let mut have_errors = false; - let inlined_arms : Vec<(Vec<_>, _)> = arms.iter().map(|arm| ( - // HACK(or_patterns; Centril | dlrobertson): Remove this and - // correctly handle exhaustiveness checking for nested or-patterns. - match &arm.pat.kind { - hir::PatKind::Or(pats) => pats, - _ => std::slice::from_ref(&arm.pat), - }.iter().map(|pat| { - let mut patcx = PatCtxt::new( - self.tcx, - self.param_env.and(self.identity_substs), - self.tables - ); - patcx.include_lint_checks(); - let pattern = expand_pattern(cx, patcx.lower_pattern(&pat)); - if !patcx.errors.is_empty() { - patcx.report_inlining_errors(pat.span); - have_errors = true; - } - (pattern, &**pat) - }).collect(), - arm.guard.as_ref().map(|g| match g { - hir::Guard::If(ref e) => &**e, + let inlined_arms: Vec<(Vec<_>, _)> = arms + .iter() + .map(|arm| { + ( + // HACK(or_patterns; Centril | dlrobertson): Remove this and + // correctly handle exhaustiveness checking for nested or-patterns. + match &arm.pat.kind { + hir::PatKind::Or(pats) => pats, + _ => std::slice::from_ref(&arm.pat), + } + .iter() + .map(|pat| { + let mut patcx = PatCtxt::new( + self.tcx, + self.param_env.and(self.identity_substs), + self.tables, + ); + patcx.include_lint_checks(); + let pattern = expand_pattern(cx, patcx.lower_pattern(&pat)); + if !patcx.errors.is_empty() { + patcx.report_inlining_errors(pat.span); + have_errors = true; + } + (pattern, &**pat) + }) + .collect(), + arm.guard.as_ref().map(|g| match g { + hir::Guard::If(ref e) => &**e, + }), + ) }) - )).collect(); + .collect(); // Bail out early if inlining failed. if have_errors { @@ -171,7 +175,7 @@ impl<'tcx> MatchVisitor<'_, 'tcx> { } // Fourth, check for unreachable arms. - check_arms(cx, &inlined_arms, source); + let matrix = check_arms(cx, &inlined_arms, source); // Then, if the match has no arms, check whether the scrutinee // is uninhabited. 
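For context on the surrounding changes: the `check_arms` call above now returns the matrix of arm patterns it has accumulated, and the exhaustiveness check further down reuses that matrix instead of rebuilding it from the inlined arms. Both checks are driven by the same usefulness predicate (`is_useful`). The standalone sketch below uses deliberately toy types, not rustc's internal ones, to illustrate that predicate for a single `bool` scrutinee: an arm is unreachable when it is not useful with respect to the rows seen before it, and a match is exhaustive when a trailing wildcard is not useful.

```rust
// Toy model of the usefulness predicate; names and types are illustrative only.
#[derive(Clone, Copy, Debug)]
enum BoolPat {
    Wild,
    Lit(bool),
}

fn covers(p: BoolPat, v: bool) -> bool {
    match p {
        BoolPat::Wild => true,
        BoolPat::Lit(b) => b == v,
    }
}

// A new pattern is "useful" w.r.t. the rows seen so far if some value matches it
// without matching any earlier row.
fn is_useful(seen: &[BoolPat], q: BoolPat) -> bool {
    [false, true].iter().any(|&v| covers(q, v) && !seen.iter().any(|&p| covers(p, v)))
}

fn main() {
    let arms = [BoolPat::Lit(true), BoolPat::Wild, BoolPat::Lit(false)];
    let mut seen = Vec::new();
    for &arm in &arms {
        if !is_useful(&seen, arm) {
            // Corresponds to the UNREACHABLE_PATTERNS lint emitted in `check_arms`.
            println!("unreachable arm: {:?}", arm);
        }
        seen.push(arm);
    }
    // Corresponds to `check_not_useful`: the match is exhaustive iff a trailing
    // wildcard would not be useful.
    println!("exhaustive: {}", !is_useful(&seen, BoolPat::Wild));
}
```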
@@ -189,17 +193,16 @@ impl<'tcx> MatchVisitor<'_, 'tcx> { def_span = self.tcx.hir().span_if_local(def.did); if def.variants.len() < 4 && !def.variants.is_empty() { // keep around to point at the definition of non-covered variants - missing_variants = def.variants.iter() - .map(|variant| variant.ident) - .collect(); + missing_variants = + def.variants.iter().map(|variant| variant.ident).collect(); } let is_non_exhaustive_and_non_local = def.is_variant_list_non_exhaustive() && !def.did.is_local(); !(is_non_exhaustive_and_non_local) && def.variants.is_empty() - }, - _ => false + } + _ => false, } }; if !scrutinee_is_uninhabited { @@ -207,18 +210,25 @@ impl<'tcx> MatchVisitor<'_, 'tcx> { let mut err = create_e0004( self.tcx.sess, scrut.span, - format!("non-exhaustive patterns: {}", match missing_variants.len() { - 0 => format!("type `{}` is non-empty", pat_ty), - 1 => format!( - "pattern `{}` of type `{}` is not handled", - missing_variants[0].name, - pat_ty, - ), - _ => format!("multiple patterns of type `{}` are not handled", pat_ty), - }), + format!( + "non-exhaustive patterns: {}", + match missing_variants.len() { + 0 => format!("type `{}` is non-empty", pat_ty), + 1 => format!( + "pattern `{}` of type `{}` is not handled", + missing_variants[0].name, pat_ty, + ), + _ => format!( + "multiple patterns of type `{}` are not handled", + pat_ty + ), + } + ), + ); + err.help( + "ensure that all possible cases are being handled, \ + possibly by adding wildcards or more match arms", ); - err.help("ensure that all possible cases are being handled, \ - possibly by adding wildcards or more match arms"); if let Some(sp) = def_span { err.span_label(sp, format!("`{}` defined here", pat_ty)); } @@ -232,12 +242,6 @@ impl<'tcx> MatchVisitor<'_, 'tcx> { return; } - let matrix: Matrix<'_, '_> = inlined_arms - .iter() - .filter(|&&(_, guard)| guard.is_none()) - .flat_map(|arm| &arm.0) - .map(|pat| smallvec![pat.0]) - .collect(); let scrut_ty = self.tables.node_type(scrut.hir_id); check_exhaustive(cx, scrut_ty, scrut.span, &matrix); }) @@ -246,15 +250,13 @@ impl<'tcx> MatchVisitor<'_, 'tcx> { fn check_irrefutable(&self, pat: &'tcx Pat, origin: &str, sp: Option<Span>) { let module = self.tcx.hir().get_module_parent(pat.hir_id); MatchCheckCtxt::create_and_enter(self.tcx, self.param_env, module, |ref mut cx| { - let mut patcx = PatCtxt::new(self.tcx, - self.param_env.and(self.identity_substs), - self.tables); + let mut patcx = + PatCtxt::new(self.tcx, self.param_env.and(self.identity_substs), self.tables); patcx.include_lint_checks(); let pattern = patcx.lower_pattern(pat); let pattern_ty = pattern.ty; - let pats: Matrix<'_, '_> = vec![smallvec![ - expand_pattern(cx, pattern) - ]].into_iter().collect(); + let pats: Matrix<'_, '_> = + vec![PatStack::from_pattern(cx, expand_pattern(cx, pattern))].into_iter().collect(); let witnesses = match check_not_useful(cx, pattern_ty, &pats) { Ok(_) => return, @@ -263,9 +265,12 @@ impl<'tcx> MatchVisitor<'_, 'tcx> { let joined_patterns = joined_uncovered_patterns(&witnesses); let mut err = struct_span_err!( - self.tcx.sess, pat.span, E0005, + self.tcx.sess, + pat.span, + E0005, "refutable pattern in {}: {} not covered", - origin, joined_patterns + origin, + joined_patterns ); let suggest_if_let = match &pat.kind { hir::PatKind::Path(hir::QPath::Resolved(None, path)) @@ -284,8 +289,10 @@ impl<'tcx> MatchVisitor<'_, 'tcx> { }; if let (Some(span), true) = (sp, suggest_if_let) { - err.note("`let` bindings require an \"irrefutable pattern\", like a `struct` or \ - an `enum` with only one
variant"); + err.note( + "`let` bindings require an \"irrefutable pattern\", like a `struct` or \ + an `enum` with only one variant", + ); if let Ok(snippet) = self.tcx.sess.source_map().span_to_snippet(span) { err.span_suggestion( span, @@ -294,8 +301,10 @@ impl<'tcx> MatchVisitor<'_, 'tcx> { Applicability::HasPlaceholders, ); } - err.note("for more information, visit \ - https://doc.rust-lang.org/book/ch18-02-refutability.html"); + err.note( + "for more information, visit \ + https://doc.rust-lang.org/book/ch18-02-refutability.html", + ); } adt_defined_here(cx, &mut err, pattern_ty, &witnesses); @@ -308,11 +317,10 @@ impl<'tcx> MatchVisitor<'_, 'tcx> { /// This caused an irrefutable match failure in e.g. `let`. fn const_not_var(err: &mut DiagnosticBuilder<'_>, tcx: TyCtxt<'_>, pat: &Pat, path: &hir::Path) { let descr = path.res.descr(); - err.span_label(pat.span, format!( - "interpreted as {} {} pattern, not a new variable", - path.res.article(), - descr, - )); + err.span_label( + pat.span, + format!("interpreted as {} {} pattern, not a new variable", path.res.article(), descr,), + ); err.span_suggestion( pat.span, @@ -339,19 +347,26 @@ fn check_for_bindings_named_same_as_variants(cx: &MatchVisitor<'_, '_>, pat: &Pa } let pat_ty = cx.tables.pat_ty(p); if let ty::Adt(edef, _) = pat_ty.kind { - if edef.is_enum() && edef.variants.iter().any(|variant| { - variant.ident == ident && variant.ctor_kind == CtorKind::Const - }) { + if edef.is_enum() + && edef.variants.iter().any(|variant| { + variant.ident == ident && variant.ctor_kind == CtorKind::Const + }) + { let ty_path = cx.tcx.def_path_str(edef.did); - let mut err = struct_span_warn!(cx.tcx.sess, p.span, E0170, + let mut err = struct_span_warn!( + cx.tcx.sess, + p.span, + E0170, "pattern binding `{}` is named the same as one \ - of the variants of the type `{}`", - ident, ty_path); + of the variants of the type `{}`", + ident, + ty_path + ); err.span_suggestion( p.span, "to match on the variant, qualify the path", format!("{}::{}", ty_path, ident), - Applicability::MachineApplicable + Applicability::MachineApplicable, ); err.emit(); } @@ -370,30 +385,29 @@ fn pat_is_catchall(pat: &Pat) -> bool { hir::PatKind::Binding(.., None) => true, hir::PatKind::Binding(.., Some(ref s)) => pat_is_catchall(s), hir::PatKind::Ref(ref s, _) => pat_is_catchall(s), - hir::PatKind::Tuple(ref v, _) => v.iter().all(|p| { - pat_is_catchall(&p) - }), - _ => false + hir::PatKind::Tuple(ref v, _) => v.iter().all(|p| pat_is_catchall(&p)), + _ => false, } } // Check for unreachable patterns -fn check_arms<'tcx>( +fn check_arms<'p, 'tcx>( cx: &mut MatchCheckCtxt<'_, 'tcx>, - arms: &[(Vec<(&super::Pat<'tcx>, &hir::Pat)>, Option<&hir::Expr>)], + arms: &[(Vec<(&'p super::Pat<'tcx>, &hir::Pat)>, Option<&hir::Expr>)], source: hir::MatchSource, -) { +) -> Matrix<'p, 'tcx> { let mut seen = Matrix::empty(); let mut catchall = None; for (arm_index, &(ref pats, guard)) in arms.iter().enumerate() { for &(pat, hir_pat) in pats { - let v = smallvec![pat]; + let v = PatStack::from_pattern(cx, pat); match is_useful(cx, &seen, &v, LeaveOutWitness) { NotUseful => { match source { - hir::MatchSource::IfDesugar { .. } | - hir::MatchSource::WhileDesugar => bug!(), + hir::MatchSource::IfDesugar { .. } | hir::MatchSource::WhileDesugar => { + bug!() + } hir::MatchSource::IfLetDesugar { .. 
} => { cx.tcx.lint_hir( lint::builtin::IRREFUTABLE_LET_PATTERNS, @@ -410,9 +424,11 @@ fn check_arms<'tcx>( 0 => { cx.tcx.lint_hir( lint::builtin::UNREACHABLE_PATTERNS, - hir_pat.hir_id, pat.span, - "unreachable pattern"); - }, + hir_pat.hir_id, + pat.span, + "unreachable pattern", + ); + } // The arm with the wildcard pattern. 1 => { cx.tcx.lint_hir( @@ -421,13 +437,12 @@ fn check_arms<'tcx>( pat.span, "irrefutable while-let pattern", ); - }, + } _ => bug!(), } } - hir::MatchSource::ForLoopDesugar | - hir::MatchSource::Normal => { + hir::MatchSource::ForLoopDesugar | hir::MatchSource::Normal => { let mut err = cx.tcx.struct_span_lint_hir( lint::builtin::UNREACHABLE_PATTERNS, hir_pat.hir_id, @@ -444,12 +459,11 @@ fn check_arms<'tcx>( // Unreachable patterns in try and await expressions occur when one of // the arms are an uninhabited type. Which is OK. - hir::MatchSource::AwaitDesugar | - hir::MatchSource::TryDesugar => {} + hir::MatchSource::AwaitDesugar | hir::MatchSource::TryDesugar => {} } } Useful => (), - UsefulWithWitness(_) => bug!() + UsefulWithWitness(_) => bug!(), } if guard.is_none() { seen.push(v); @@ -459,6 +473,7 @@ fn check_arms<'tcx>( } } } + seen } fn check_not_useful( @@ -467,7 +482,7 @@ fn check_not_useful( matrix: &Matrix<'_, 'tcx>, ) -> Result<(), Vec<super::Pat<'tcx>>> { let wild_pattern = super::Pat { ty, span: DUMMY_SP, kind: box PatKind::Wild }; - match is_useful(cx, matrix, &[&wild_pattern], ConstructWitness) { + match is_useful(cx, matrix, &PatStack::from_pattern(cx, &wild_pattern), ConstructWitness) { NotUseful => Ok(()), // This is good, wildcard pattern isn't reachable. UsefulWithWitness(pats) => Err(if pats.is_empty() { vec![wild_pattern] @@ -491,14 +506,15 @@ fn check_exhaustive<'tcx>( let joined_patterns = joined_uncovered_patterns(&witnesses); let mut err = create_e0004( - cx.tcx.sess, sp, + cx.tcx.sess, + sp, format!("non-exhaustive patterns: {} not covered", joined_patterns), ); err.span_label(sp, pattern_not_covered_label(&witnesses, &joined_patterns)); adt_defined_here(cx, &mut err, scrut_ty, &witnesses); err.help( "ensure that all possible cases are being handled, \ - possibly by adding wildcards or more match arms" + possibly by adding wildcards or more match arms", ) .emit(); } @@ -551,7 +567,7 @@ fn maybe_point_at_variant(ty: Ty<'_>, patterns: &[super::Pat<'_>]) -> Vec // Don't point at variants that have already been covered due to other patterns to avoid // visual clutter. for pattern in patterns { - use PatKind::{AscribeUserType, Deref, Variant, Or, Leaf}; + use PatKind::{AscribeUserType, Deref, Leaf, Or, Variant}; match &*pattern.kind { AscribeUserType { subpattern, ..
} | Deref { subpattern } => { covered.extend(maybe_point_at_variant(ty, slice::from_ref(&subpattern))); @@ -563,13 +579,15 @@ fn maybe_point_at_variant(ty: Ty<'_>, patterns: &[super::Pat<'_>]) -> Vec } covered.push(sp); - let pats = subpatterns.iter() + let pats = subpatterns + .iter() .map(|field_pattern| field_pattern.pattern.clone()) .collect::<Box<[_]>>(); covered.extend(maybe_point_at_variant(ty, &pats)); } Leaf { subpatterns } => { - let pats = subpatterns.iter() + let pats = subpatterns + .iter() .map(|field_pattern| field_pattern.pattern.clone()) .collect::<Box<[_]>>(); covered.extend(maybe_point_at_variant(ty, &pats)); @@ -654,7 +672,7 @@ fn check_legality_of_bindings_in_at_patterns(cx: &MatchVisitor<'_, '_>, pat: &Pa struct AtBindingPatternVisitor<'a, 'b, 'tcx> { cx: &'a MatchVisitor<'b, 'tcx>, - bindings_allowed: bool + bindings_allowed: bool, } impl<'v> Visitor<'v> for AtBindingPatternVisitor<'_, '_, '_> { @@ -666,10 +684,14 @@ impl<'v> Visitor<'v> for AtBindingPatternVisitor<'_, '_, '_> { match pat.kind { hir::PatKind::Binding(.., ref subpat) => { if !self.bindings_allowed { - struct_span_err!(self.cx.tcx.sess, pat.span, E0303, - "pattern bindings are not allowed after an `@`") - .span_label(pat.span, "not allowed after `@`") - .emit(); + struct_span_err!( + self.cx.tcx.sess, + pat.span, + E0303, + "pattern bindings are not allowed after an `@`" + ) + .span_label(pat.span, "not allowed after `@`") + .emit(); } if subpat.is_some() { diff --git a/src/test/ui/consts/const_let_refutable.stderr b/src/test/ui/consts/const_let_refutable.stderr index 7f15f02d4d37b..9acb4ad9cbbe5 100644 --- a/src/test/ui/consts/const_let_refutable.stderr +++ b/src/test/ui/consts/const_let_refutable.stderr @@ -1,8 +1,8 @@ -error[E0005]: refutable pattern in function argument: `&[]`, `&[_]` and `&[_, _, _]` not covered +error[E0005]: refutable pattern in function argument: `&[]`, `&[_]` and `&[_, _, _, ..]` not covered --> $DIR/const_let_refutable.rs:3:16 | LL | const fn slice([a, b]: &[i32]) -> i32 { - | ^^^^^^ patterns `&[]`, `&[_]` and `&[_, _, _]` not covered + | ^^^^^^ patterns `&[]`, `&[_]` and `&[_, _, _, ..]` not covered error[E0723]: can only call other `const fn` within a `const fn`, but `const <&i32 as std::ops::Add>::add` is not stable as `const fn` --> $DIR/const_let_refutable.rs:4:5 diff --git a/src/test/ui/uninhabited/always-inhabited-union-ref.rs b/src/test/ui/pattern/usefulness/always-inhabited-union-ref.rs similarity index 100% rename from src/test/ui/uninhabited/always-inhabited-union-ref.rs rename to src/test/ui/pattern/usefulness/always-inhabited-union-ref.rs diff --git a/src/test/ui/uninhabited/always-inhabited-union-ref.stderr b/src/test/ui/pattern/usefulness/always-inhabited-union-ref.stderr similarity index 100% rename from src/test/ui/uninhabited/always-inhabited-union-ref.stderr rename to src/test/ui/pattern/usefulness/always-inhabited-union-ref.stderr diff --git a/src/test/ui/exhaustive_integer_patterns.rs b/src/test/ui/pattern/usefulness/exhaustive_integer_patterns.rs similarity index 100% rename from src/test/ui/exhaustive_integer_patterns.rs rename to src/test/ui/pattern/usefulness/exhaustive_integer_patterns.rs diff --git a/src/test/ui/exhaustive_integer_patterns.stderr b/src/test/ui/pattern/usefulness/exhaustive_integer_patterns.stderr similarity index 100% rename from src/test/ui/exhaustive_integer_patterns.stderr rename to src/test/ui/pattern/usefulness/exhaustive_integer_patterns.stderr diff --git a/src/test/ui/guards-not-exhaustive.rs
b/src/test/ui/pattern/usefulness/guards-not-exhaustive.rs similarity index 100% rename from src/test/ui/guards-not-exhaustive.rs rename to src/test/ui/pattern/usefulness/guards-not-exhaustive.rs diff --git a/src/test/ui/pattern/irrefutable-exhaustive-integer-binding.rs b/src/test/ui/pattern/usefulness/irrefutable-exhaustive-integer-binding.rs similarity index 100% rename from src/test/ui/pattern/irrefutable-exhaustive-integer-binding.rs rename to src/test/ui/pattern/usefulness/irrefutable-exhaustive-integer-binding.rs diff --git a/src/test/ui/irrefutable-unit.rs b/src/test/ui/pattern/usefulness/irrefutable-unit.rs similarity index 100% rename from src/test/ui/irrefutable-unit.rs rename to src/test/ui/pattern/usefulness/irrefutable-unit.rs diff --git a/src/test/ui/check_match/issue-35609.rs b/src/test/ui/pattern/usefulness/issue-35609.rs similarity index 100% rename from src/test/ui/check_match/issue-35609.rs rename to src/test/ui/pattern/usefulness/issue-35609.rs diff --git a/src/test/ui/check_match/issue-35609.stderr b/src/test/ui/pattern/usefulness/issue-35609.stderr similarity index 100% rename from src/test/ui/check_match/issue-35609.stderr rename to src/test/ui/pattern/usefulness/issue-35609.stderr diff --git a/src/test/ui/check_match/issue-43253.rs b/src/test/ui/pattern/usefulness/issue-43253.rs similarity index 100% rename from src/test/ui/check_match/issue-43253.rs rename to src/test/ui/pattern/usefulness/issue-43253.rs diff --git a/src/test/ui/check_match/issue-43253.stderr b/src/test/ui/pattern/usefulness/issue-43253.stderr similarity index 100% rename from src/test/ui/check_match/issue-43253.stderr rename to src/test/ui/pattern/usefulness/issue-43253.stderr diff --git a/src/test/ui/match/match-argm-statics-2.rs b/src/test/ui/pattern/usefulness/match-arm-statics-2.rs similarity index 100% rename from src/test/ui/match/match-argm-statics-2.rs rename to src/test/ui/pattern/usefulness/match-arm-statics-2.rs diff --git a/src/test/ui/match/match-argm-statics-2.stderr b/src/test/ui/pattern/usefulness/match-arm-statics-2.stderr similarity index 90% rename from src/test/ui/match/match-argm-statics-2.stderr rename to src/test/ui/pattern/usefulness/match-arm-statics-2.stderr index 8c54e030823af..8521e37d3fddc 100644 --- a/src/test/ui/match/match-argm-statics-2.stderr +++ b/src/test/ui/pattern/usefulness/match-arm-statics-2.stderr @@ -1,5 +1,5 @@ error[E0004]: non-exhaustive patterns: `(true, false)` not covered - --> $DIR/match-argm-statics-2.rs:17:11 + --> $DIR/match-arm-statics-2.rs:17:11 | LL | match (true, false) { | ^^^^^^^^^^^^^ pattern `(true, false)` not covered @@ -7,7 +7,7 @@ LL | match (true, false) { = help: ensure that all possible cases are being handled, possibly by adding wildcards or more match arms error[E0004]: non-exhaustive patterns: `Some(Some(West))` not covered - --> $DIR/match-argm-statics-2.rs:29:11 + --> $DIR/match-arm-statics-2.rs:29:11 | LL | match Some(Some(North)) { | ^^^^^^^^^^^^^^^^^ pattern `Some(Some(West))` not covered @@ -15,7 +15,7 @@ LL | match Some(Some(North)) { = help: ensure that all possible cases are being handled, possibly by adding wildcards or more match arms error[E0004]: non-exhaustive patterns: `Foo { bar: Some(North), baz: NewBool(true) }` not covered - --> $DIR/match-argm-statics-2.rs:48:11 + --> $DIR/match-arm-statics-2.rs:48:11 | LL | / struct Foo { LL | | bar: Option, diff --git a/src/test/ui/match/match-arm-statics.rs b/src/test/ui/pattern/usefulness/match-arm-statics.rs similarity index 100% rename from 
src/test/ui/match/match-arm-statics.rs rename to src/test/ui/pattern/usefulness/match-arm-statics.rs diff --git a/src/test/ui/match/match-arm-statics.stderr b/src/test/ui/pattern/usefulness/match-arm-statics.stderr similarity index 100% rename from src/test/ui/match/match-arm-statics.stderr rename to src/test/ui/pattern/usefulness/match-arm-statics.stderr diff --git a/src/test/ui/match/match-byte-array-patterns-2.rs b/src/test/ui/pattern/usefulness/match-byte-array-patterns-2.rs similarity index 100% rename from src/test/ui/match/match-byte-array-patterns-2.rs rename to src/test/ui/pattern/usefulness/match-byte-array-patterns-2.rs diff --git a/src/test/ui/match/match-byte-array-patterns-2.stderr b/src/test/ui/pattern/usefulness/match-byte-array-patterns-2.stderr similarity index 78% rename from src/test/ui/match/match-byte-array-patterns-2.stderr rename to src/test/ui/pattern/usefulness/match-byte-array-patterns-2.stderr index d53e2e25b3dbd..6e52072e3bfec 100644 --- a/src/test/ui/match/match-byte-array-patterns-2.stderr +++ b/src/test/ui/pattern/usefulness/match-byte-array-patterns-2.stderr @@ -6,11 +6,11 @@ LL | match buf { | = help: ensure that all possible cases are being handled, possibly by adding wildcards or more match arms -error[E0004]: non-exhaustive patterns: `&[]`, `&[_]`, `&[_, _]` and 3 more not covered +error[E0004]: non-exhaustive patterns: `&[..]` not covered --> $DIR/match-byte-array-patterns-2.rs:10:11 | LL | match buf { - | ^^^ patterns `&[]`, `&[_]`, `&[_, _]` and 3 more not covered + | ^^^ pattern `&[..]` not covered | = help: ensure that all possible cases are being handled, possibly by adding wildcards or more match arms diff --git a/src/test/ui/match/match-byte-array-patterns.rs b/src/test/ui/pattern/usefulness/match-byte-array-patterns.rs similarity index 100% rename from src/test/ui/match/match-byte-array-patterns.rs rename to src/test/ui/pattern/usefulness/match-byte-array-patterns.rs diff --git a/src/test/ui/match/match-byte-array-patterns.stderr b/src/test/ui/pattern/usefulness/match-byte-array-patterns.stderr similarity index 100% rename from src/test/ui/match/match-byte-array-patterns.stderr rename to src/test/ui/pattern/usefulness/match-byte-array-patterns.stderr diff --git a/src/test/ui/match/match-non-exhaustive.rs b/src/test/ui/pattern/usefulness/match-non-exhaustive.rs similarity index 100% rename from src/test/ui/match/match-non-exhaustive.rs rename to src/test/ui/pattern/usefulness/match-non-exhaustive.rs diff --git a/src/test/ui/match/match-non-exhaustive.stderr b/src/test/ui/pattern/usefulness/match-non-exhaustive.stderr similarity index 100% rename from src/test/ui/match/match-non-exhaustive.stderr rename to src/test/ui/pattern/usefulness/match-non-exhaustive.stderr diff --git a/src/test/ui/match/match-privately-empty.rs b/src/test/ui/pattern/usefulness/match-privately-empty.rs similarity index 100% rename from src/test/ui/match/match-privately-empty.rs rename to src/test/ui/pattern/usefulness/match-privately-empty.rs diff --git a/src/test/ui/match/match-privately-empty.stderr b/src/test/ui/pattern/usefulness/match-privately-empty.stderr similarity index 100% rename from src/test/ui/match/match-privately-empty.stderr rename to src/test/ui/pattern/usefulness/match-privately-empty.stderr diff --git a/src/test/ui/match/match-range-fail-dominate.rs b/src/test/ui/pattern/usefulness/match-range-fail-dominate.rs similarity index 100% rename from src/test/ui/match/match-range-fail-dominate.rs rename to 
src/test/ui/pattern/usefulness/match-range-fail-dominate.rs diff --git a/src/test/ui/match/match-range-fail-dominate.stderr b/src/test/ui/pattern/usefulness/match-range-fail-dominate.stderr similarity index 100% rename from src/test/ui/match/match-range-fail-dominate.stderr rename to src/test/ui/pattern/usefulness/match-range-fail-dominate.stderr diff --git a/src/test/ui/match/match-ref-ice.rs b/src/test/ui/pattern/usefulness/match-ref-ice.rs similarity index 100% rename from src/test/ui/match/match-ref-ice.rs rename to src/test/ui/pattern/usefulness/match-ref-ice.rs diff --git a/src/test/ui/match/match-ref-ice.stderr b/src/test/ui/pattern/usefulness/match-ref-ice.stderr similarity index 100% rename from src/test/ui/match/match-ref-ice.stderr rename to src/test/ui/pattern/usefulness/match-ref-ice.stderr diff --git a/src/test/ui/match/match-slice-patterns.rs b/src/test/ui/pattern/usefulness/match-slice-patterns.rs similarity index 80% rename from src/test/ui/match/match-slice-patterns.rs rename to src/test/ui/pattern/usefulness/match-slice-patterns.rs index afbeb61e4415a..af7fd53a1f1e9 100644 --- a/src/test/ui/match/match-slice-patterns.rs +++ b/src/test/ui/pattern/usefulness/match-slice-patterns.rs @@ -2,7 +2,7 @@ fn check(list: &[Option<()>]) { match list { - //~^ ERROR `&[_, Some(_), None, _]` not covered + //~^ ERROR `&[_, Some(_), .., None, _]` not covered &[] => {}, &[_] => {}, &[_, _] => {}, diff --git a/src/test/ui/match/match-slice-patterns.stderr b/src/test/ui/pattern/usefulness/match-slice-patterns.stderr similarity index 65% rename from src/test/ui/match/match-slice-patterns.stderr rename to src/test/ui/pattern/usefulness/match-slice-patterns.stderr index 24769db34c932..72ae5d5fe3b33 100644 --- a/src/test/ui/match/match-slice-patterns.stderr +++ b/src/test/ui/pattern/usefulness/match-slice-patterns.stderr @@ -1,8 +1,8 @@ -error[E0004]: non-exhaustive patterns: `&[_, Some(_), None, _]` not covered +error[E0004]: non-exhaustive patterns: `&[_, Some(_), .., None, _]` not covered --> $DIR/match-slice-patterns.rs:4:11 | LL | match list { - | ^^^^ pattern `&[_, Some(_), None, _]` not covered + | ^^^^ pattern `&[_, Some(_), .., None, _]` not covered | = help: ensure that all possible cases are being handled, possibly by adding wildcards or more match arms diff --git a/src/test/ui/match/match-vec-fixed.rs b/src/test/ui/pattern/usefulness/match-vec-fixed.rs similarity index 100% rename from src/test/ui/match/match-vec-fixed.rs rename to src/test/ui/pattern/usefulness/match-vec-fixed.rs diff --git a/src/test/ui/match/match-vec-fixed.stderr b/src/test/ui/pattern/usefulness/match-vec-fixed.stderr similarity index 100% rename from src/test/ui/match/match-vec-fixed.stderr rename to src/test/ui/pattern/usefulness/match-vec-fixed.stderr diff --git a/src/test/ui/match/match-vec-unreachable.rs b/src/test/ui/pattern/usefulness/match-vec-unreachable.rs similarity index 100% rename from src/test/ui/match/match-vec-unreachable.rs rename to src/test/ui/pattern/usefulness/match-vec-unreachable.rs diff --git a/src/test/ui/match/match-vec-unreachable.stderr b/src/test/ui/pattern/usefulness/match-vec-unreachable.stderr similarity index 100% rename from src/test/ui/match/match-vec-unreachable.stderr rename to src/test/ui/pattern/usefulness/match-vec-unreachable.stderr diff --git a/src/test/ui/binding/nested-exhaustive-match.rs b/src/test/ui/pattern/usefulness/nested-exhaustive-match.rs similarity index 100% rename from src/test/ui/binding/nested-exhaustive-match.rs rename to 
src/test/ui/pattern/usefulness/nested-exhaustive-match.rs diff --git a/src/test/ui/match/non-exhaustive-defined-here.rs b/src/test/ui/pattern/usefulness/non-exhaustive-defined-here.rs similarity index 100% rename from src/test/ui/match/non-exhaustive-defined-here.rs rename to src/test/ui/pattern/usefulness/non-exhaustive-defined-here.rs diff --git a/src/test/ui/match/non-exhaustive-defined-here.stderr b/src/test/ui/pattern/usefulness/non-exhaustive-defined-here.stderr similarity index 100% rename from src/test/ui/match/non-exhaustive-defined-here.stderr rename to src/test/ui/pattern/usefulness/non-exhaustive-defined-here.stderr diff --git a/src/test/ui/non-exhaustive/non-exhaustive-float-range-match.rs b/src/test/ui/pattern/usefulness/non-exhaustive-float-range-match.rs similarity index 100% rename from src/test/ui/non-exhaustive/non-exhaustive-float-range-match.rs rename to src/test/ui/pattern/usefulness/non-exhaustive-float-range-match.rs diff --git a/src/test/ui/non-exhaustive/non-exhaustive-float-range-match.stderr b/src/test/ui/pattern/usefulness/non-exhaustive-float-range-match.stderr similarity index 100% rename from src/test/ui/non-exhaustive/non-exhaustive-float-range-match.stderr rename to src/test/ui/pattern/usefulness/non-exhaustive-float-range-match.stderr diff --git a/src/test/ui/non-exhaustive/non-exhaustive-match-nested.rs b/src/test/ui/pattern/usefulness/non-exhaustive-match-nested.rs similarity index 100% rename from src/test/ui/non-exhaustive/non-exhaustive-match-nested.rs rename to src/test/ui/pattern/usefulness/non-exhaustive-match-nested.rs diff --git a/src/test/ui/non-exhaustive/non-exhaustive-match-nested.stderr b/src/test/ui/pattern/usefulness/non-exhaustive-match-nested.stderr similarity index 100% rename from src/test/ui/non-exhaustive/non-exhaustive-match-nested.stderr rename to src/test/ui/pattern/usefulness/non-exhaustive-match-nested.stderr diff --git a/src/test/ui/non-exhaustive/non-exhaustive-match.rs b/src/test/ui/pattern/usefulness/non-exhaustive-match.rs similarity index 99% rename from src/test/ui/non-exhaustive/non-exhaustive-match.rs rename to src/test/ui/pattern/usefulness/non-exhaustive-match.rs index 0e5a9203c5f80..bfca5352353a7 100644 --- a/src/test/ui/non-exhaustive/non-exhaustive-match.rs +++ b/src/test/ui/pattern/usefulness/non-exhaustive-match.rs @@ -44,7 +44,7 @@ fn main() { } let vec = vec![0.5f32]; let vec: &[f32] = &vec; - match *vec { //~ ERROR non-exhaustive patterns: `[_, _, _, _]` not covered + match *vec { //~ ERROR non-exhaustive patterns: `[_, _, _, _, ..]` not covered [0.1, 0.2, 0.3] => (), [0.1, 0.2] => (), [0.1] => (), diff --git a/src/test/ui/non-exhaustive/non-exhaustive-match.stderr b/src/test/ui/pattern/usefulness/non-exhaustive-match.stderr similarity index 95% rename from src/test/ui/non-exhaustive/non-exhaustive-match.stderr rename to src/test/ui/pattern/usefulness/non-exhaustive-match.stderr index 5dba05e16427a..577867e4e7122 100644 --- a/src/test/ui/non-exhaustive/non-exhaustive-match.stderr +++ b/src/test/ui/pattern/usefulness/non-exhaustive-match.stderr @@ -66,11 +66,11 @@ LL | match *vec { | = help: ensure that all possible cases are being handled, possibly by adding wildcards or more match arms -error[E0004]: non-exhaustive patterns: `[_, _, _, _]` not covered +error[E0004]: non-exhaustive patterns: `[_, _, _, _, ..]` not covered --> $DIR/non-exhaustive-match.rs:47:11 | LL | match *vec { - | ^^^^ pattern `[_, _, _, _]` not covered + | ^^^^ pattern `[_, _, _, _, ..]` not covered | = help: ensure that all possible 
cases are being handled, possibly by adding wildcards or more match arms diff --git a/src/test/ui/non-exhaustive/non-exhaustive-pattern-witness.rs b/src/test/ui/pattern/usefulness/non-exhaustive-pattern-witness.rs similarity index 100% rename from src/test/ui/non-exhaustive/non-exhaustive-pattern-witness.rs rename to src/test/ui/pattern/usefulness/non-exhaustive-pattern-witness.rs diff --git a/src/test/ui/non-exhaustive/non-exhaustive-pattern-witness.stderr b/src/test/ui/pattern/usefulness/non-exhaustive-pattern-witness.stderr similarity index 100% rename from src/test/ui/non-exhaustive/non-exhaustive-pattern-witness.stderr rename to src/test/ui/pattern/usefulness/non-exhaustive-pattern-witness.stderr diff --git a/src/test/ui/refutable-pattern-errors.rs b/src/test/ui/pattern/usefulness/refutable-pattern-errors.rs similarity index 100% rename from src/test/ui/refutable-pattern-errors.rs rename to src/test/ui/pattern/usefulness/refutable-pattern-errors.rs diff --git a/src/test/ui/refutable-pattern-errors.stderr b/src/test/ui/pattern/usefulness/refutable-pattern-errors.stderr similarity index 100% rename from src/test/ui/refutable-pattern-errors.stderr rename to src/test/ui/pattern/usefulness/refutable-pattern-errors.stderr diff --git a/src/test/ui/refutable-pattern-in-fn-arg.rs b/src/test/ui/pattern/usefulness/refutable-pattern-in-fn-arg.rs similarity index 100% rename from src/test/ui/refutable-pattern-in-fn-arg.rs rename to src/test/ui/pattern/usefulness/refutable-pattern-in-fn-arg.rs diff --git a/src/test/ui/refutable-pattern-in-fn-arg.stderr b/src/test/ui/pattern/usefulness/refutable-pattern-in-fn-arg.stderr similarity index 100% rename from src/test/ui/refutable-pattern-in-fn-arg.stderr rename to src/test/ui/pattern/usefulness/refutable-pattern-in-fn-arg.stderr diff --git a/src/test/ui/pattern/usefulness/slice-patterns.rs b/src/test/ui/pattern/usefulness/slice-patterns.rs new file mode 100644 index 0000000000000..da2d40caf1a40 --- /dev/null +++ b/src/test/ui/pattern/usefulness/slice-patterns.rs @@ -0,0 +1,110 @@ +#![feature(slice_patterns)] +#![deny(unreachable_patterns)] + +fn main() { + let s: &[bool] = &[true; 0]; + let s0: &[bool; 0] = &[]; + let s1: &[bool; 1] = &[false; 1]; + let s2: &[bool; 2] = &[false; 2]; + let s3: &[bool; 3] = &[false; 3]; + + let [] = s0; + let [_] = s1; + let [_, _] = s2; + + let [..] = s; + let [..] = s0; + let [..] = s1; + let [..] = s2; + let [..] = s3; + + let [_, _, ..] = s2; + let [_, .., _] = s2; + let [.., _, _] = s2; + + match s1 { + [true, ..] => {} + [.., false] => {} + } + match s2 { + //~^ ERROR `&[false, true]` not covered + [true, ..] => {} + [.., false] => {} + } + match s3 { + //~^ ERROR `&[false, _, true]` not covered + [true, ..] => {} + [.., false] => {} + } + match s { + //~^ ERROR `&[false, .., true]` not covered + [] => {} + [true, ..] => {} + [.., false] => {} + } + + match s3 { + //~^ ERROR `&[false, _, _]` not covered + [true, .., true] => {} + } + match s { + //~^ ERROR `&[_, ..]` not covered + [] => {} + } + match s { + //~^ ERROR `&[_, _, ..]` not covered + [] => {} + [_] => {} + } + match s { + //~^ ERROR `&[false, ..]` not covered + [] => {} + [true, ..] => {} + } + match s { + //~^ ERROR `&[false, _, ..]` not covered + [] => {} + [_] => {} + [true, ..] => {} + } + match s { + //~^ ERROR `&[_, .., false]` not covered + [] => {} + [_] => {} + [.., true] => {} + } + + match s { + [true, ..] => {} + [true, ..] => {} //~ ERROR unreachable pattern + [true] => {} //~ ERROR unreachable pattern + [..] 
=> {} + } + match s { + [.., true] => {} + [.., true] => {} //~ ERROR unreachable pattern + [true] => {} //~ ERROR unreachable pattern + [..] => {} + } + match s { + [false, .., true] => {} + [false, .., true] => {} //~ ERROR unreachable pattern + [false, true] => {} //~ ERROR unreachable pattern + [false] => {} + [..] => {} + } + match s { + //~^ ERROR `&[_, _, .., true]` not covered + [] => {} + [_] => {} + [_, _] => {} + [.., false] => {} + } + match s { + //~^ ERROR `&[true, _, .., _]` not covered + [] => {} + [_] => {} + [_, _] => {} + [false, .., false] => {} + } +} diff --git a/src/test/ui/pattern/usefulness/slice-patterns.stderr b/src/test/ui/pattern/usefulness/slice-patterns.stderr new file mode 100644 index 0000000000000..6afe4705b0e69 --- /dev/null +++ b/src/test/ui/pattern/usefulness/slice-patterns.stderr @@ -0,0 +1,133 @@ +error[E0004]: non-exhaustive patterns: `&[false, true]` not covered + --> $DIR/slice-patterns.rs:29:11 + | +LL | match s2 { + | ^^ pattern `&[false, true]` not covered + | + = help: ensure that all possible cases are being handled, possibly by adding wildcards or more match arms + +error[E0004]: non-exhaustive patterns: `&[false, _, true]` not covered + --> $DIR/slice-patterns.rs:34:11 + | +LL | match s3 { + | ^^ pattern `&[false, _, true]` not covered + | + = help: ensure that all possible cases are being handled, possibly by adding wildcards or more match arms + +error[E0004]: non-exhaustive patterns: `&[false, .., true]` not covered + --> $DIR/slice-patterns.rs:39:11 + | +LL | match s { + | ^ pattern `&[false, .., true]` not covered + | + = help: ensure that all possible cases are being handled, possibly by adding wildcards or more match arms + +error[E0004]: non-exhaustive patterns: `&[false, _, _]` not covered + --> $DIR/slice-patterns.rs:46:11 + | +LL | match s3 { + | ^^ pattern `&[false, _, _]` not covered + | + = help: ensure that all possible cases are being handled, possibly by adding wildcards or more match arms + +error[E0004]: non-exhaustive patterns: `&[_, ..]` not covered + --> $DIR/slice-patterns.rs:50:11 + | +LL | match s { + | ^ pattern `&[_, ..]` not covered + | + = help: ensure that all possible cases are being handled, possibly by adding wildcards or more match arms + +error[E0004]: non-exhaustive patterns: `&[_, _, ..]` not covered + --> $DIR/slice-patterns.rs:54:11 + | +LL | match s { + | ^ pattern `&[_, _, ..]` not covered + | + = help: ensure that all possible cases are being handled, possibly by adding wildcards or more match arms + +error[E0004]: non-exhaustive patterns: `&[false, ..]` not covered + --> $DIR/slice-patterns.rs:59:11 + | +LL | match s { + | ^ pattern `&[false, ..]` not covered + | + = help: ensure that all possible cases are being handled, possibly by adding wildcards or more match arms + +error[E0004]: non-exhaustive patterns: `&[false, _, ..]` not covered + --> $DIR/slice-patterns.rs:64:11 + | +LL | match s { + | ^ pattern `&[false, _, ..]` not covered + | + = help: ensure that all possible cases are being handled, possibly by adding wildcards or more match arms + +error[E0004]: non-exhaustive patterns: `&[_, .., false]` not covered + --> $DIR/slice-patterns.rs:70:11 + | +LL | match s { + | ^ pattern `&[_, .., false]` not covered + | + = help: ensure that all possible cases are being handled, possibly by adding wildcards or more match arms + +error: unreachable pattern + --> $DIR/slice-patterns.rs:79:9 + | +LL | [true, ..] 
=> {} + | ^^^^^^^^^^ + | +note: lint level defined here + --> $DIR/slice-patterns.rs:2:9 + | +LL | #![deny(unreachable_patterns)] + | ^^^^^^^^^^^^^^^^^^^^ + +error: unreachable pattern + --> $DIR/slice-patterns.rs:80:9 + | +LL | [true] => {} + | ^^^^^^ + +error: unreachable pattern + --> $DIR/slice-patterns.rs:85:9 + | +LL | [.., true] => {} + | ^^^^^^^^^^ + +error: unreachable pattern + --> $DIR/slice-patterns.rs:86:9 + | +LL | [true] => {} + | ^^^^^^ + +error: unreachable pattern + --> $DIR/slice-patterns.rs:91:9 + | +LL | [false, .., true] => {} + | ^^^^^^^^^^^^^^^^^ + +error: unreachable pattern + --> $DIR/slice-patterns.rs:92:9 + | +LL | [false, true] => {} + | ^^^^^^^^^^^^^ + +error[E0004]: non-exhaustive patterns: `&[_, _, .., true]` not covered + --> $DIR/slice-patterns.rs:96:11 + | +LL | match s { + | ^ pattern `&[_, _, .., true]` not covered + | + = help: ensure that all possible cases are being handled, possibly by adding wildcards or more match arms + +error[E0004]: non-exhaustive patterns: `&[true, _, .., _]` not covered + --> $DIR/slice-patterns.rs:103:11 + | +LL | match s { + | ^ pattern `&[true, _, .., _]` not covered + | + = help: ensure that all possible cases are being handled, possibly by adding wildcards or more match arms + +error: aborting due to 17 previous errors + +For more information about this error, try `rustc --explain E0004`. diff --git a/src/test/ui/structs/struct-like-enum-nonexhaustive.rs b/src/test/ui/pattern/usefulness/struct-like-enum-nonexhaustive.rs similarity index 100% rename from src/test/ui/structs/struct-like-enum-nonexhaustive.rs rename to src/test/ui/pattern/usefulness/struct-like-enum-nonexhaustive.rs diff --git a/src/test/ui/structs/struct-like-enum-nonexhaustive.stderr b/src/test/ui/pattern/usefulness/struct-like-enum-nonexhaustive.stderr similarity index 100% rename from src/test/ui/structs/struct-like-enum-nonexhaustive.stderr rename to src/test/ui/pattern/usefulness/struct-like-enum-nonexhaustive.stderr diff --git a/src/test/ui/structs/struct-pattern-match-useless.rs b/src/test/ui/pattern/usefulness/struct-pattern-match-useless.rs similarity index 100% rename from src/test/ui/structs/struct-pattern-match-useless.rs rename to src/test/ui/pattern/usefulness/struct-pattern-match-useless.rs diff --git a/src/test/ui/structs/struct-pattern-match-useless.stderr b/src/test/ui/pattern/usefulness/struct-pattern-match-useless.stderr similarity index 100% rename from src/test/ui/structs/struct-pattern-match-useless.stderr rename to src/test/ui/pattern/usefulness/struct-pattern-match-useless.stderr diff --git a/src/test/ui/tuple/tuple-struct-nonexhaustive.rs b/src/test/ui/pattern/usefulness/tuple-struct-nonexhaustive.rs similarity index 100% rename from src/test/ui/tuple/tuple-struct-nonexhaustive.rs rename to src/test/ui/pattern/usefulness/tuple-struct-nonexhaustive.rs diff --git a/src/test/ui/tuple/tuple-struct-nonexhaustive.stderr b/src/test/ui/pattern/usefulness/tuple-struct-nonexhaustive.stderr similarity index 100% rename from src/test/ui/tuple/tuple-struct-nonexhaustive.stderr rename to src/test/ui/pattern/usefulness/tuple-struct-nonexhaustive.stderr diff --git a/src/test/ui/uninhabited/uninhabited-matches-feature-gated.stderr b/src/test/ui/uninhabited/uninhabited-matches-feature-gated.stderr index a49344e45cec6..7af6075262c6d 100644 --- a/src/test/ui/uninhabited/uninhabited-matches-feature-gated.stderr +++ b/src/test/ui/uninhabited/uninhabited-matches-feature-gated.stderr @@ -30,11 +30,11 @@ LL | let _ = match x {}; | = help: ensure that all 
possible cases are being handled, possibly by adding wildcards or more match arms -error[E0004]: non-exhaustive patterns: `&[_]` not covered +error[E0004]: non-exhaustive patterns: `&[_, ..]` not covered --> $DIR/uninhabited-matches-feature-gated.rs:21:19 | LL | let _ = match x { - | ^ pattern `&[_]` not covered + | ^ pattern `&[_, ..]` not covered | = help: ensure that all possible cases are being handled, possibly by adding wildcards or more match arms
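The renamed tests and updated stderr files above show the user-visible effect of this change: missing slice cases are now reported with variable-length witnesses such as `&[_, ..]` or `&[false, .., true]` instead of an enumeration of fixed-length patterns. A small self-contained example in the spirit of the new slice-patterns.rs test (the function and strings here are illustrative only; subslice patterns like `[true, ..]` needed `#![feature(slice_patterns)]` on the nightly toolchain this patch targets, and are stable in later releases):

```rust
fn describe(s: &[bool]) -> &'static str {
    match s {
        [] => "empty",
        [true, ..] => "starts with true",
        [.., false] => "ends with false",
        // Without this final arm the match is non-exhaustive, and with this change
        // the compiler reports the single witness `&[false, .., true]` (the case
        // exercised by slice-patterns.rs) rather than listing fixed-length witnesses.
        [false, .., true] => "starts with false, ends with true",
    }
}

fn main() {
    assert_eq!(describe(&[]), "empty");
    assert_eq!(describe(&[true, false]), "starts with true");
    assert_eq!(describe(&[false, true]), "starts with false, ends with true");
}
```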