Add transparent types #4727
Conversation
Hello, and thank you for opening this PR! 🎉
All contributors have signed the CLA, thank you! ❤️
Commit Messages
We want to keep history, but for that to actually be useful we have
some rules on how to format our commit messages (relevant xkcd).
Please stick to these guidelines for commit messages:
- Separate subject from body with a blank line
- When fixing an issue, start your commit message with
Fix #<ISSUE-NBR>:
- Limit the subject line to 72 characters
- Capitalize the subject line
- Do not end the subject line with a period
- Use the imperative mood in the subject line ("Add" instead of "Added")
- Wrap the body at 80 characters
- Use the body to explain what and why vs. how
adapted from https://chris.beams.io/posts/git-commit
Have an awesome day! ☀️
A class is pure for the purpose of reducing projections in inlining if none of its base classes has an initializer. To make this robust wrt compilation order, we need to move computation of NoInits flags from Typer to the class completer. # Conflicts: # compiler/src/dotty/tools/dotc/typer/Inliner.scala
Four improvements:
1. Applications of stable and pure functions to pure arguments are pure. This is important since constructors of pure classes are marked Stable.
2. New(...) expressions are pure (any side effects are caused by the constructor application).
3. Module values are pure if their module classes are pure.
4. scala.Product and scala.Serializable are assumed to be pure. Therefore, case classes can also be pure.

These predictions remove most remaining unused bindings in the run/typelevel.scala test.
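A hedged sketch of the kinds of expressions these rules would now treat as pure; the names are invented for this illustration and are not from the test suite:

```scala
object PurityDemo {
  // Illustrative sketch only; names are invented for this example.
  case class Pos(x: Int, y: Int)  // constructor of a "pure" class is marked Stable

  object Geometry                 // a module value is pure if its module class is pure

  def demo(): Unit = {
    val p = new Pos(1, 2)         // New(...) is pure; any side effects come from the constructor call
    val g = Geometry              // pure module value
    // Pos also extends scala.Product and scala.Serializable, which are assumed pure,
    // so bindings like `p` and `g` can be dropped when they turn out to be unused after inlining.
  }
}
```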
Don't issue a "pure expression does nothing in statement position" warning for self and super constructor calls. They are conceptually unit-returning in this position anyway.
Methods with default arguments in traits generate defender methods, so the trait cannot be a pure interface. Checking def kinds earlier in Namer rather than Typer exhibited that special case.
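For illustration, an assumed minimal case of the special case mentioned above:

```scala
// Assumed minimal example: the default argument below needs generated code in the trait,
// so Greeter can no longer be compiled as a pure interface.
trait Greeter {
  def greet(name: String = "world"): String
}
```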
Constructors (and potentially, in the future, other methods) can have the stable flag set. We need to adapt ExtractAPI to this change.
Introduce UntypedSplice and make TreeCopiers and TreeMaps more regular to take splices into account.
- Extend scheme to accumulators.
- No need anymore to define special tpd versions of maps and accumulators.
- No extra `typeMap` field needed in typeMap; instead the UntypedSplice case is handled directly.
Some refactorings so that it will become easier to generate untyped trees
- Mark splices in code under -print-debug.
- Print owners of all definitions under -print-debug-owners.
Allow UNTYPEDSPLICE roots with TYPEDSPLICE subtrees. Trees between them are untyped.
- Need to maintain Type mode bit for correct desugaring.
- Need to pickle PatDefs directly since desugaring requires too much context.
Among others, adds a new tag for EMPTYTREE. The serialized info is ambiguous otherwise. E.g. TypeBounds(Empty, B) and TypeBounds(B, Empty) serialize in the same way.
All necessary changes except for those in the Inliner itself.
The new inliner inlines both inline methods with fully-typed expansions and transparent methods with partially untyped expansions.
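A rough sketch of the two flavours, using the `transparent` syntax from this PR; the exact behaviour described in the comments is my reading of the commit message, not the final semantics:

```scala
object InlinerDemo {
  inline def twice(x: Int): Int = x + x    // body is fully typed before inlining

  transparent def choose(b: Boolean) =     // body stays partially untyped, so each call site
    if b then 1 else "one"                 // can be re-typed and get a more precise result type
}
```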
and more tests
This lets us write HList concat in the most trivial way possible.
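A hedged reconstruction of what such a trivial HList concat could look like with transparent methods; this is my own sketch, not the code from run/typelevel.scala:

```scala
object HListDemo {
  sealed trait HList
  case object HNil extends HList
  case class HCons[H, T <: HList](head: H, tail: T) extends HList

  transparent def concat(xs: HList, ys: HList): HList = xs match {
    case HNil        => ys
    case HCons(h, t) => HCons(h, concat(t, ys))
  }

  // With statically known arguments, each match reduces away and the result type
  // is as precise as a hand-written HCons[Int, HCons[String, HNil.type]].
  val zs = concat(HCons(1, HNil), HCons("a", HNil))
}
```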
This was the enclosing class instead of the method containing the annotation. Fixing this is surprisingly hard.
It used to be the position of the last inline call, which is not very helpful. Now it is the position of the first inline call, which started the recursive inline expansion.
Suppress recursive inlining of transparent methods unless the call is in redex position. Recursive calls to transparent methods under an if-then-else or match are not inlined unless the if-then-else or match is reduced away. Recursive calls on the rhs of a method definition, type definition, or lazy value definition are also not inlined. In principle, we should also suppress inlining calls in a by-name parameter position, but this is harder to do, first because we don't always have the function type when deciding this (in typedApply of TransparentTyper), and second, because if the outer call with the by-name parameter is to a transparent method, we might decide after inlining that call that the by-name parameter is after all in redex position, so we should inline it recursively. But by that time, we might already have used the recursive call's type to instantiate type parameters of the enclosing function. If the recursive call is not inlined beforehand, these instantiations might lose precision. Therefore, it's safer to inline transparent calls in by-name parameters anyway.
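An assumed illustration of the redex-position rule (not taken from the PR's tests): the recursive call below sits under an if-then-else, so it is only inlined once the condition is known and the if-then-else reduces away:

```scala
object RedexDemo {
  transparent def power(x: Double, n: Int): Double =
    if n == 0 then 1.0
    else x * power(x, n - 1)   // recursive call is under the `if`, so it is not inlined
                               // until the condition is known and the `if` reduces away

  val eight = power(2.0, 3)    // each condition is known here, so the recursion unfolds fully
}
```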
Reduce terms equivalent to `new C(args).x` to `arg_i` if `x` refers to parameter `i` of class `C` and `C`'s constructor does not have side effects.
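For example (my own illustration of the rule, not code from the PR):

```scala
// C's constructor only stores its arguments, so it has no side effects.
class C(val x: Int, val y: Int)

// A term equivalent to `new C(1, 2).x` can then be reduced to `1`,
// because `x` refers to the first constructor parameter of C.
```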
Allow types to be parameterized over terms. Parameterized type definitions are treated analogously to transparent methods.
@OlivierBlanvillain I believe transparent type definitions are both simpler and more natural than TypeOf.

Simpler: Types don't need to contain terms; only the right-hand side of type definitions can be terms. This is precisely the borderline between Scala's path-dependent type discipline and full dependent types. Full dependent types are fascinating, but the conceptual implications of embedding them in a full-spectrum language like Scala and the implementation difficulties will likely take years to master. Having tried this for a bit, I am now convinced we are currently only scratching the surface; there's much more complexity lurking underneath. So I feel this is too much to squeeze in for Scala 3. Maybe for a later version, after we have a complete working implementation and have done lots of experiments.

More natural: Dependently typed languages compute types directly (after all, types are just terms). The same happens here. By contrast, TypeOf feels more roundabout and is non-standard, as far as other languages go. There is precedent in Eiffel (like types) and I believe also D-lang, but I am not sure how powerful the construct is there. It is certainly not central in either language's approach.

A possible alternative could be to link
Conceptually, yes, since it's exactly the same mechanism. But I am not sure that works yet.
Here's what works currently:

```scala
trait Nat {
def toInt: Int
}
case object Z extends Nat {
transparent def toInt = 0
}
case class S[N <: Nat](n: N) extends Nat {
transparent def toInt = n.toInt + 1
}
object Test extends App {
type Z = Z.type
type IntOrString(x: Boolean) = if x then Int else String
val x: IntOrString(true) = 1
val x1: Int = x
val y: IntOrString(false) = ""
val y1: String = y
type ToNat(n: Int) <: Nat =
if n == 0 then Z
else S[ToNat(n - 1)]
val n0: ToNat(0) = Z
val n1: ToNat(1) = S(Z)
val n3: ToNat(3) = S(S(S(Z)))
val i0: 0 = n0.toInt
val i1: 1 = n1.toInt
val i3: 3 = n3.toInt
def ii: Int = 2
def nn: ToNat(ii) = S(S(Z)) // no expansion possible, since `ii` is of type `Int`.
val ii1: Int = nn.toInt
}
```
One thing we should still explore is @gsps's approach to applied termrefs, i.e. let a singleton be not only a path but also an application of a pure singleton function to singleton arguments. That gives us more power, and lets us in particular express ML's applicative functors. But it does not fundamentally change the rules of the game and the implementation, since these applied termrefs are still types all the way down; no reference to a term is needed. I believe this would give us an interesting way to do "value-dependent typing", i.e. instead of having to map terms and decide their equality, make your operations transparent enough that all arguments that matter are singletons that can be compared structurally.
I think my main concern here is that lots of interesting things will require lots of transparency and that could have a fairly severe effect on code size (if I'm understanding the underlying mechanism correctly, that is).
I must say, these error message tests are probably the biggest productivity killer in working with dotty. Why do we have them? What do they accomplish? They certainly throw lots of spanners in the works.
Depending on what its function part is.
Currently the translation of a hole depends on whether we are reading a type tree or a term tree, but that distinction is fickle.
True. But it would still be better than the present situation where we use implicit resolution to achieve the same effect, do you agree? Also, the problem can be solved by having a non-transparent method with a transparent type (it would require a type cast to convince the type checker that the type of the non-transparent method is what is computed).
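A sketch of that workaround, reusing the `IntOrString` example from the snippet above; the exact signature and the need for a cast are my assumptions about what was meant:

```scala
// Sketch only: uses the parameterized type syntax proposed in this PR.
type IntOrString(b: Boolean) = if b then Int else String

// A non-transparent method can still declare a transparent type as its result,
// but its body needs a cast to convince the type checker.
def pick(b: Boolean): IntOrString(b) =
  (if b then 1 else "one").asInstanceOf[IntOrString(b)]
```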
Awesome, then we are on the same page :) The idea with But this is only an implementation detail; a convenient way to encode things in the compiler that requires minimal updates to TASTY. Conceptually
My takeaway from studying Richard Eisenberg's thesis is that the long term vision for dependent types in GHC/Haskell (what he calls Dependent Haskell) is similar to the

To keep it concrete, we could try to retrace the history of GHC extensions and see what would be their equivalent in Scala + transparent methods:
(The list comes from this talk)
I believe we need to distinguish between TypeOf as an operator in source programs and
It is debatable whether we need Crucially, Having
even if the
as an expression over types, instead of
Needed to accurately reflect parameterized type applications. We now use APPLYtpt/TYPEAPPLYtpt instead of APPLIEDtpt.
With the introduction of `Apply` nodes as types, we were facing an imbalance: type applications distinguished between type trees and term trees by having two classes, `AppliedTypeTree` and `TypeApply`, while term applications did not. In the interest of keeping the number of tree classes low, we now use `TypeApply` for both type and term trees that are applied to type tree arguments. The only exception is in the Tasty format, where we need to distinguish the two, because otherwise a hole followed by arguments would be ambiguous. Distinguishing between terms and types in the Tasty serialization format is done systematically everywhere. It has the advantage that it increases redundancy. But for the internal tree representation it's better to have as few classes as possible.
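A simplified stand-in for the regularization described here; this is not the actual dotc `Trees` API, only a model of the idea:

```scala
// Simplified model, for illustration only.
sealed trait Tree
case class Ident(name: String) extends Tree
case class Apply(fun: Tree, args: List[Tree]) extends Tree       // term applied to term arguments
case class TypeApply(fun: Tree, args: List[Tree]) extends Tree   // anything applied to type arguments

object TreeDemo {
  // Both a type tree like `List[Int]` and a term tree like `identity[Int]`
  // can now be represented by the same kind of node:
  val typeTree = TypeApply(Ident("List"), List(Ident("Int")))
  val termTree = TypeApply(Ident("identity"), List(Ident("Int")))
}
```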
Yes, I completely agree that for current use cases this is a win.
Converted flags to termflags before.
`toLambda` would always compute the denotation of a TypeRef, which makes it problematic as a widening operator, since it might force too much. It now widens only if the corresponding symbol is a TypeMethod.
Curried type methods open the possibility of partial applications. But this would give us full dependent typing, with types containing arbitrary terms (namely the arguments of the partial application).
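To make the concern concrete, a hypothetical curried type definition; the syntax is not part of this PR and the semantics are assumed:

```scala
// Hypothetical syntax, only to illustrate the problem with partial application.
type Pair(x: Int)(y: Int) = (x.type, y.type)

// A partial application such as `Pair(1)` is neither a finished type nor a term:
// its meaning has to mention the term argument `1`, i.e. a type containing a term.
```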
It seems that it is almost impossible to do transparent types without going "all in" with term-dependent types, which is something I'd like to avoid for Scala 3.0. Furthermore, https://github.com/dotty-staging/dotty/blob/add-transparent/docs/docs/typelevel.md provides convincing evidence that transparent types might not be needed for the examples we have in mind so far. So I am closing this for the time being with the intent to work on #4616 instead.
Based on #4616. This PR is about adding computed types, similar to type families in Haskell.
The idea is to allow type definition syntax like
A definition like that would be treated like a transparent method, but with a type as result instead of a value.
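For instance, echoing the `IntOrString` definition shown earlier in this conversation (the original snippet in the PR description is not preserved here):

```scala
// Example drawn from the working snippet above; this syntax was never merged.
type IntOrString(x: Boolean) = if x then Int else String
```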