
bundle of doc improvements #8974

Merged: 11 commits, May 28, 2020
8 changes: 4 additions & 4 deletions docs/docs/reference/changed-features/implicit-resolution.md
@@ -2,7 +2,7 @@
layout: doc-page
title: "Changes in Implicit Resolution"
---

This page describes changes to implicit resolution that apply both to the new `given`s and to the old-style `implicit`s in Dotty.
Implicit resolution uses a new algorithm which caches implicit results
more aggressively for performance. There are also some changes that
affect implicits on the language level.
@@ -39,7 +39,7 @@
due to _shadowing_ (where an implicit is hidden by a nested definition)
no longer applies.

3. Package prefixes no longer contribute to the implicit scope of a type.
3. Package prefixes no longer contribute to the implicit search scope of a type.
Example:
```scala
package p
@@ -52,7 +52,7 @@
```
Both `a` and `b` are visible as implicits at the point of the definition
of `type C`. However, a reference to `p.o.C` outside of package `p` will
have only `b` in its implicit scope but not `a`.
have only `b` in its implicit search scope but not `a`.

4. The treatment of ambiguity errors has changed. If an ambiguity is encountered
in some recursive step of an implicit search, the ambiguity is propagated to the caller.
@@ -85,7 +85,7 @@
5. The treatment of divergence errors has also changed. A divergent implicit is
treated as a normal failure, after which alternatives are still tried. This also makes
sense: Encountering a divergent implicit means that we assume that no finite
solution can be found on the given path, but another path can still be tried. By contrast
solution can be found on the corresponding path, but another path can still be tried. By contrast,
most (but not all) divergence errors in Scala 2 would terminate the implicit
search as a whole.
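
   A hedged sketch of the new behavior (hypothetical `Wrap`, `loop`, and `ok`; current Scala 3 syntax):
   ```scala
   case class Wrap[T](x: T)

   // Solving for Wrap[T] via `loop` demands Wrap[Wrap[T]], then Wrap[Wrap[Wrap[T]]], ...,
   // so this candidate diverges.
   given loop[T](using w: Wrap[Wrap[T]]): Wrap[T] = Wrap(w.x.x)
   given ok: Wrap[Int] = Wrap(0)

   val found = summon[Wrap[Int]] // the divergent candidate counts as a normal failure; `ok` still succeeds
   ```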

10 changes: 5 additions & 5 deletions docs/docs/reference/changed-features/numeric-literals.md
@@ -52,9 +52,9 @@ val x = -10_000_000_000
```
gives a type error, since without an expected type `-10_000_000_000` is treated by rule (3) as an `Int` literal, but it is too large for that type.
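
With an expected type the same digits are accepted; a minimal sketch:
```scala
val ok: Long = -10_000_000_000 // fine: the expected type Long guides the conversion
```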

### The FromDigits Class
### The FromDigits Trait

To allow numeric literals, a type simply has to define a given instance of the
To allow numeric literals, a type simply has to define a `given` instance of the
`scala.util.FromDigits` typeclass, or one of its subclasses. `FromDigits` is defined
as follows:
```scala
trait FromDigits[T] {
  def fromDigits(digits: String): T
}
```

@@ -153,7 +153,7 @@ object BigFloat {

```scala
BigFloat(BigInt(intPart), exponent)
}
```
To accept `BigFloat` literals, all that's needed in addition is a given instance of type
To accept `BigFloat` literals, all that's needed in addition is a `given` instance of type
`FromDigits.Floating[BigFloat]`:
```scala
given FromDigits as FromDigits.Floating[BigFloat] {
  def fromDigits(digits: String) = apply(digits)
}
```

@@ -196,7 +196,7 @@

```scala
}
```
Note that an inline method cannot directly fill in for an abstract method, since it produces
no code that can be executed at runtime. That's why we define an intermediary class
no code that can be executed at runtime. That is why we define an intermediary class
`FromDigits` that contains a fallback implementation which is then overridden by the inline
method in the `FromDigits` given instance. That method is defined in terms of a macro
implementation method `fromDigitsImpl`. Here is its definition:
@@ -220,7 +220,7 @@
```
The macro implementation takes an argument of type `Expr[String]` and yields
a result of type `Expr[BigFloat]`. It tests whether its argument is a constant
string. If that's the case, it converts the string using the `apply` method
string. If that is the case, it converts the string using the `apply` method
and lifts the resulting `BigFloat` back to `Expr` level. For non-constant
strings `fromDigitsImpl(digits)` is simply `apply(digits)`, i.e. everything is
evaluated at runtime in this case.
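
Usage would then look like this (a sketch, assuming the `BigFloat` definitions above):
```scala
val x: BigFloat = 1e1000000000 // far too large for Double, but accepted via FromDigits
```
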
21 changes: 11 additions & 10 deletions docs/docs/reference/contextual/multiversal-equality.md
@@ -67,19 +67,19 @@ given Eql[B, A] = Eql.derived
The `scala.Eql` object defines a number of `Eql` given instances that together
define a rule book for what standard types can be compared (more details below).

There's also a "fallback" instance named `eqlAny` that allows comparisons
There is also a "fallback" instance named `eqlAny` that allows comparisons
over all types that do not themselves have an `Eql` given. `eqlAny` is defined as follows:

```scala
def eqlAny[L, R]: Eql[L, R] = Eql.derived
```

Even though `eqlAny` is not declared a given, the compiler will still construct an `eqlAny` instance as answer to an implicit search for the
Even though `eqlAny` is not declared as `given`, the compiler will still construct an `eqlAny` instance in answer to an implicit search for the
type `Eql[L, R]`, unless `L` or `R` have `Eql` instances
defined on them, or the language feature `strictEquality` is enabled
defined on them, or the language feature `strictEquality` is enabled.
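
For illustration, a hedged sketch with made-up case classes:

```scala
case class Name(value: String)
case class Id(value: Int)

val cmp = Name("x") == Id(1) // compiles via the eqlAny fallback (and is always false)

// After `import scala.language.strictEquality`, the same comparison is rejected,
// since no Eql[Name, Id] instance exists.
```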

The primary motivation for having `eqlAny` is backwards compatibility,
if this is of no concern, one can disable `eqlAny` by enabling the language
The primary motivation for having `eqlAny` is backwards compatibility.
If this is of no concern, one can disable `eqlAny` by enabling the language
feature `strictEquality`. As with all language features, this can either be
done with an import

@@ -112,7 +112,7 @@ The precise rules for equality checking are as follows.

If the `strictEquality` feature is enabled then
a comparison using `x == y` or `x != y` between values `x: T` and `y: U`
is legal if there is a given of type `Eql[T, U]`.
is legal if there is a `given` of type `Eql[T, U]`.

In the default case where the `strictEquality` feature is not enabled the comparison is
also legal if
@@ -123,8 +123,8 @@

Explanations:

- _lifting_ a type `S` means replacing all references to abstract types
in covariant positions of `S` by their upper bound, and to replacing
- _lifting_ a type `S` means replacing all references to abstract types
in covariant positions of `S` by their upper bound, and replacing
all refinement types in covariant positions of `S` by their parent.
- a type `T` has a _reflexive_ `Eql` instance if the implicit search for `Eql[T, T]`
succeeds.
@@ -153,8 +153,9 @@ Instances are defined so that every one of these types has a _reflexive_ `Eql` i
## Why Two Type Parameters?

One particular feature of the `Eql` type is that it takes _two_ type parameters, representing the types of the two items to be compared. By contrast, conventional
implementations of an equality type class take only a single type parameter which represents the common type of _both_ operands. One type parameter is simpler than two, so why go through the additional complication? The reason has to do with the fact that, rather than coming up with a type class where no operation existed before,
we are dealing with a refinement of pre-existing, universal equality. It's best illustrated through an example.
implementations of an equality type class take only a single type parameter which represents the common type of _both_ operands.
One type parameter is simpler than two, so why go through the additional complication? The reason has to do with the fact that, rather than coming up with a type class where no operation existed before,
we are dealing with a refinement of pre-existing, universal equality. It is best illustrated through an example.

Say you want to come up with a safe version of the `contains` method on `List[T]`. The original definition of `contains` in the standard library was:
```scala
class List[+T] {
  ...
  def contains(x: Any): Boolean
}
```

6 changes: 3 additions & 3 deletions docs/docs/reference/metaprogramming/tasty-inspect.md
@@ -9,7 +9,7 @@ libraryDependencies += "ch.epfl.lamp" %% "dotty-tasty-inspector" % scalaVersion.

TASTy files contain the full typed tree of a class including source positions
and documentation. This is ideal for tools that analyze or extract semantic
information of the code. To avoid the hassle of working directly with the TASTy
information from the code. To avoid the hassle of working directly with the TASTy
file we provide the `TastyInspector` which loads the contents and exposes them
through the TASTy reflect API.

@@ -42,8 +42,8 @@ object Test {
}
```

Note that if we need to run the main (in an object called `Test`) after
compilation we need make available the compiler to the runtime:
Note that if we need to run the `main` method (defined in an object called `Test` in the example above) after
compilation, we need to make the compiler available to the runtime:

```shell
dotc -d out Test.scala
Expand Down
17 changes: 8 additions & 9 deletions docs/docs/reference/metaprogramming/tasty-reflect.md
@@ -14,7 +14,7 @@ You may find all you need without using TASTy Reflect.
## API: From quotes and splices to TASTy reflect trees and back

With `quoted.Expr` and `quoted.Type` we can compute code but also analyze code
by inspecting the ASTs. [Macros](./macros.md) provides the guarantee that the
by inspecting the ASTs. [Macros](./macros.md) provide the guarantee that the
generation of code will be type-correct. Using TASTy Reflect will break these
guarantees and may fail at macro expansion time, hence additional explicit
checks must be done.
@@ -64,8 +64,7 @@ To easily know which extractors are needed, the `showExtractors` method on a
The method `qctx.tasty.Term.seal` provides a way to go back to a
`quoted.Expr[Any]`. Note that the type is `Expr[Any]`. Consequently, the type
must be set explicitly with a checked `cast` call. If the type does not conform
to it an exception will be thrown. In the code above, we could have replaced
`Expr(n)` by `xTree.seal.cast[Int]`.
to it an exception will be thrown at runtime.

### Obtaining the underlying argument

@@ -92,8 +91,8 @@

### Positions

The tasty context provides a `rootPosition` value. For macros it corresponds to
the expansion site. The macro authors can obtain various information about that
The tasty context provides a `rootPosition` value. It corresponds to
the expansion site for macros. The macro authors can obtain various information about that
expansion site. The example below shows how we can obtain position information
such as the start line, the end line or even the source code at the expansion
point.
@@ -117,10 +116,10 @@ def macroImpl()(qctx: QuoteContext): Expr[Unit] = {
### Tree Utilities

`scala.tasty.reflect` contains three facilities for tree traversal and
transformations.
transformation.

`TreeAccumulator` ties the knot of a traversal. By calling `foldOver(x, tree)`
we can dive in the `tree` node and start accumulating values of type `X` (e.g.,
we can dive into the `tree` node and start accumulating values of type `X` (e.g.,
of type `List[Symbol]` if we want to collect symbols). The code below, for
example, collects the pattern variables of a tree.

@@ -142,8 +141,8 @@ but without returning any value. Finally a `TreeMap` performs a transformation.
#### Let

`scala.tasty.Reflection` also offers a method `let` that allows us
to bind the `rhs` to a `val` and use it in `body`. Additionally, `lets` binds
the given `terms` to names and use them in the `body`. Their type definitions
to bind the `rhs` (right-hand side) to a `val` and use it in `body`. Additionally, `lets` binds
the given `terms` to names and allows us to use them in the `body`. Their type definitions
are shown below:

```scala
4 changes: 2 additions & 2 deletions docs/docs/reference/new-types/intersection-types-spec.md
@@ -45,11 +45,11 @@ A & B <: B A & B <: A
In other words, `A & B` is the same type as `B & A`, in the sense that the two types
have the same values and are subtypes of each other.

If `C` is a type constructor, the join `C[A] & C[B]` is simplified by pulling the
intersection inside the constructor, using the following two rules:
If `C` is a type constructor, then `C[A] & C[B]` can be simplified using the following three rules:

- If `C` is covariant, `C[A] & C[B] ~> C[A & B]`
- If `C` is contravariant, `C[A] & C[B] ~> C[A | B]`
- If `C` is non-variant, emit a compile error
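
A rough sketch of the first two rules (hypothetical `Co` and `Contra`; the non-variant case is the one that errors):

```scala
class Co[+T]
class Contra[-T]

def covariant(x: Co[String] & Co[Int]): Co[String & Int] = x // pulled inside as &
def contravariant(x: Contra[String] & Contra[Int]): Contra[String | Int] = x // flips to |
```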

When `C` is covariant, `C[A & B] <: C[A] & C[B]` can be derived:

10 changes: 5 additions & 5 deletions docs/docs/reference/new-types/intersection-types.md
@@ -11,18 +11,18 @@ The type `S & T` represents values that are of the type `S` and `T` at the same

```scala
trait Resettable {
def reset(): this.type
def reset(): Unit
}
trait Growable[T] {
def add(x: T): this.type
def add(t: T): Unit
}
def f(x: Resettable & Growable[String]) = {
x.reset()
x.add("first")
}
```

The value `x` is required to be _both_ a `Resettable` and a
The parameter `x` is required to be _both_ a `Resettable` and a
`Growable[String]`.

The members of an intersection type `A & B` are all the members of `A` and all
@@ -51,8 +51,8 @@ can be further simplified to `List[A & B]` because `List` is
covariant.

One might wonder how the compiler could come up with a definition for
`children` of type `List[A & B]` since all its is given are `children`
definitions of type `List[A]` and `List[B]`. The answer is it does not
`children` of type `List[A & B]` since all it is given are `children`
definitions of type `List[A]` and `List[B]`. The answer is that the compiler does not
need to. `A & B` is just a type that represents a set of requirements for
values of the type. At the point where a value is _constructed_, one
must make sure that all inherited members are correctly defined.
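
A sketch of such a construction site, following the `children` example described above:

```scala
trait A { def children: List[A] }
trait B { def children: List[B] }

class C extends A with B {
  def children: List[A & B] = Nil // one definition satisfies both requirements
}

val x: A & B = new C
val ys: List[A & B] = x.children
```
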
16 changes: 12 additions & 4 deletions docs/docs/reference/new-types/union-types-spec.md
@@ -76,17 +76,17 @@ class A extends C[A] with D
class B extends C[B] with D with E
```

The join of `A | B` is `C[A | B] with D`
The join of `A | B` is `C[A | B] & D`

## Type inference

When inferring the result type of a definition (`val`, `var`, or `def`), if the
type we are about to infer is a union type, we replace it by its join.
When inferring the result type of a definition (`val`, `var`, or `def`) and the
type we are about to infer is a union type, then we replace it by its join.
Similarly, when instantiating a type argument, if the corresponding type
parameter is not upper-bounded by a union type and the type we are about to
instantiate is a union type, we replace it by its join. This mirrors the
treatment of singleton types which are also widened to their underlying type
unless explicitly specified. and the motivation is the same: inferring types
unless explicitly specified. The motivation is the same: inferring types
which are "too precise" can lead to unintuitive typechecking issues later on.
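
For example (a sketch with hypothetical classes):

```scala
class Shape
class Circle extends Shape
class Square extends Shape

def cond: Boolean = scala.util.Random.nextBoolean()

val s = if (cond) new Circle else new Square // inferred type: Shape (the join)
val t: Circle | Square = // the union survives only when written explicitly
  if (cond) new Circle else new Square
```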

Note: Since this behavior limits the usability of union types, it might
@@ -127,6 +127,14 @@ trait B { def hello: String }
def test(x: A | B) = x.hello // error: value `hello` is not a member of A | B
```

On the other hand, the following would be allowed:
```scala
trait C { def hello: String }
trait D
trait E
trait A extends C with D
trait B extends C with E

def test(x: A | B) = x.hello // ok, as `hello` is a member of the join of A | B, which is C
```

## Exhaustivity checking

If the selector of a pattern match is a union type, the match is considered
2 changes: 1 addition & 1 deletion docs/docs/reference/other-new-features/export.md
@@ -87,9 +87,9 @@ export O.c
def f: c.T = ...
```

<a id="note_class"></a>
Export clauses can appear in classes or they can appear at the top-level. An export clause cannot appear as a statement in a block.
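
A hedged sketch of the block restriction (hypothetical names):

```scala
object Q {
  def m: Int = 0
}

export Q.m // ok at the top level

def g: Int = {
  export Q.m // error: an export clause cannot appear as a block statement
  m
}
```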

<a id="note_class"></a>
(\*) Note: Unless otherwise stated, the term "class" in this discussion also includes object and trait definitions.

### Motivation
8 changes: 5 additions & 3 deletions docs/docs/reference/other-new-features/named-typeargs.md
@@ -5,7 +5,7 @@ title: "Named Type Arguments"

**Note:** This feature is implemented in Dotty, but is not expected to be part of Scala 3.0.

Type arguments of methods can now be named, as well as by position. Example:
Type arguments of methods can now be specified by name as well as by position. Example:

``` scala
def construct[Elem, Coll[_]](xs: Elem*): Coll[Elem] = ???

val xs1 = construct[Coll = List, Elem = Int](1, 2, 3)
val xs2 = construct[Coll = List](1, 2, 3)
```

Similar to a named value argument `(x = e)`, a named type argument
`[X = T]` instantiates the type parameter `X` to the type `T`. Type
arguments must be all named or un-named, mixtures of named and
`[X = T]` instantiates the type parameter `X` to the type `T`.
Named type arguments do not have to be in order (see `xs1` above) and
unspecified arguments are inferred by the compiler (see `xs2` above).
Type arguments must be all named or un-named, mixtures of named and
positional type arguments are not supported.
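
For instance, reusing `construct` from above (a sketch):

``` scala
val allNamed = construct[Elem = Int, Coll = List](1, 2, 3) // fine: all type arguments named
// construct[Int, Coll = List](1, 2, 3) // error: mixes positional and named type arguments
```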

## Motivation
Expand Down