Sketch of possible typelevel extension of principled metaprogramming #3844

14 changes: 7 additions & 7 deletions docs/docs/typelevel.md
@@ -126,7 +126,7 @@ Another tricky aspect is how to ensure that the right-hand side of
easy to see that we use the same recursive decomposition of `n` in
both cases and that the result types of each case correspond. But to
prove this correspondence in general would require some sort of
symbolic computation and it is not yet clear what the details of that
symbolic evaluation and it is not yet clear what the details of that
would be.
Contributor:

Dependently-typed languages give one answer, Ωmega gives a slightly different one, SMT solvers yet another one. There can be reasons to ignore all that, but I’m curious which ones they are.

Contributor:

I forgot to mention the application of SMT solvers, refinement types, and Liquid Haskell/“Liquid Scala” (is that the name, @gsps?), as in @gsps’s Scala ’16 paper (https://conf.researchr.org/event/scala-2016/scala-2016-smt-based-checking-of-qualified-types-for-scala). I suspect refinement types are closer to industrial use than dependent types, since their typecheckers end up being “smarter” in many common scenarios.

Contributor Author (@odersky, Jan 16, 2018):

I believe the fact that you cite three different and competing approaches in one sentence is sufficient reason to be wary of whether any of them is ready. If one works out satisfactorily, fine. But right now we do not know, and unless you provide me with proof that one of these approaches would work in all cases, with good efficiency, and without restricting the language in any way, I will stay with my position that "it is not yet clear".


But there is a simpler way: We can _delay_ checking the precise result
@@ -136,16 +136,16 @@ information available at the inlining point. At that point the inlined
expansion of the macro is type-checked using the fully computed types.
Contributor:

In this case, we would not be typechecking the macro; we would be writing a contract on it and enforcing it at “macro runtime” (where “runtime” means the runtime of the staged code, so it is still while Dotty runs, but it is not when defining the type function but when running it; as usual with staging, we don’t have just compile-time and runtime). This is an interesting idea and it has certain advantages, but there are well-understood tradeoffs between typechecking and (macro-)runtime enforcement of contracts, and we should make sure those tradeoffs are appropriate.

It’d IMHO be important not to talk about types, because the word suggests the wrong assumptions. I can already foresee an example puzzler: “my type function ‘typechecks’ but fails at this new call site”. More annoyingly, bugs in a type-level library might be detected by some user. Calling these “contracts” might reduce the confusion a bit (though it might not be enough); but would this be worth using?
But since many languages can already type-check type-level functions, I’d like a reason to instead do research on something new and unproven.
Maybe dealing with a typechecker for type functions is too hard for our target users?
But are there really people who want type-level programming but are willing to admit the downsides of typecheckers (beyond me)?

And if those people exist, are they willing to give up separate typechecking of type-level functions to avoid those downsides? I’m not. Sometimes faking dependent types is too hard, as Chlipala will tell anybody who listens; but then, just don’t fake them at all.

Contributor:

Uh, the last paragraph was misleading; I just meant “looks like I’m not the target user”, which might be wrong. I’m not expecting Scala = Idris/Agda/...; I could rant about those languages for hours :-). But I’d like to tease out advantages and disadvantages as early as possible.

Contributor Author:

I think it's OK to use the contract analogy. But for me these are still types. It's quite analogous to (predicate) refinement types a la Liquid Haskell. Here again we might check them early or late, or check them at run-time.


That still raises the question of how we are going to typecheck the
_definition_ of a macro. The idea is to to approximate any computed
_definition_ of a macro. The idea is to approximate any computed
type at the definition that depends on unknown parameters by its upper
bound.

E.g., in the case of `toNat`, the type of the recursive call
`toNat(n - 1)` would be the upper bound `Nat` of the declared return type
`ToNat(n - 1)`. This gives `S[Nat]` as the type of the else
clause. The full type of the right hand side `Z | S[Nat]` would then
be checked against the upper approximation of the result type
`Nat`. This succeeds, so the definition is deemed type-correct.
be checked against the upper approximation of the declared result type
`ToNat(n)`, which is again `Nat`. This succeeds, so the definition is deemed type-correct.

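As a rough illustration of this upper-bound approximation, here is an ordinary (non-typelevel) Scala sketch, assuming a Peano encoding of `Nat` along the lines used earlier in the document; the point is only that the recursive call is typed at the bound `Nat`, so the whole body conforms to `Nat`:

```scala
// Peano encoding, assumed to match the one used earlier in the document.
sealed trait Nat
case object Z extends Nat
case class S[N <: Nat](pred: N) extends Nat

// Plain-Scala analogue of `toNat`: the recursive call `toNat(n - 1)` is typed
// at the upper bound `Nat`, so the else branch has type `S[Nat]` and the whole
// body has type `Z.type | S[Nat]`, which conforms to the declared bound `Nat`.
def toNat(n: Int): Nat =
  if (n == 0) Z
  else S(toNat(n - 1))
```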

### Second Step: Nicer Syntax for Type Definitions
@@ -379,7 +379,7 @@ We can define type-level Booleans analogously to Peano numbers:

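The Boolean declarations themselves are collapsed in this hunk; a minimal sketch, assuming they mirror the Peano encoding of `Nat`:

```scala
// Assumed type-level Boolean encoding, analogous to Z/S for Nat.
sealed trait Bool
case object True extends Bool
case object False extends Bool
```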
Here's a type function that returns the result of comparing two `Nat`
types `X` and `Y`, returning `True` iff the peano number defined by
`X` is smaller than the peano number defined by `Y` and `False otherwise:
`X` is smaller than the peano number defined by `Y` and `Falsesy` otherwise:
Member:

Falsesy?


type < [X <: Nat, Y <: Nat] <: Bool = {
case X =:= Z, Y =:= S[type T1] => True
@@ -489,8 +489,8 @@ grammar](../internal/syntax.md).
| ‘'’ ‘[’ Type ‘]’

Def ::= ...
| ‘type’ ‘def’ DefSig ‘=’ ‘~’ SimpleExpr
| ‘type’ id [TypTypeParamClause] DefParamClauses ‘=’ Type
| ‘type’ ‘def’ DefSig [‘<:’ Type] ‘=’ ‘~’ SimpleExpr
| ‘type’ id [TypTypeParamClause] DefParamClauses [‘<:’ Type] ‘=’ Type

SimpleExpr ::= ...
| ‘{’ {Query ‘=>’ Block} ‘}’
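For concreteness, here is a hedged reading of the two amended `Def` productions in the proposal's sketched notation (not accepted Scala; `toNat` and `natExpr` are placeholder names). The only change from the previous productions is the optional `‘<:’ Type` upper bound:

```
// ‘type’ ‘def’ DefSig [‘<:’ Type] ‘=’ ‘~’ SimpleExpr
type def toNat(n: Int) <: Nat = ~ natExpr(n)

// ‘type’ id [TypTypeParamClause] DefParamClauses [‘<:’ Type] ‘=’ Type
// (the `<` function shown earlier, `type < [X <: Nat, Y <: Nat] <: Bool = { ... }`,
//  is an instance of this production, with its `<: Bool` bound made explicit)
```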