diff --git a/docs/docs/internals/syntax.md b/docs/docs/internals/syntax.md index 6bbd74a65153..cb4567915eaa 100644 --- a/docs/docs/internals/syntax.md +++ b/docs/docs/internals/syntax.md @@ -146,6 +146,9 @@ FunType ::= FunArgTypes ‘=>’ Type FunArgTypes ::= InfixType | ‘(’ [ ‘[given]’ FunArgType {‘,’ FunArgType } ] ‘)’ | ‘(’ ‘[given]’ TypedFunParam {‘,’ TypedFunParam } ‘)’ +GivenArgs ::= InfixType + | ‘(’ [ FunArgType {‘,’ FunArgType } ] ‘)’ + | ‘(’ ‘val’ TypedFunParam {‘,’ ‘val’ TypedFunParam } ‘)’ TypedFunParam ::= id ‘:’ Type MatchType ::= InfixType `match` ‘{’ TypeCaseClauses ‘}’ InfixType ::= RefinedType {id [nl] RefinedType} InfixOp(t1, op, t2) diff --git a/docs/docs/reference/contextual-defaults/context-bounds.md b/docs/docs/reference/contextual-defaults/context-bounds.md new file mode 100644 index 000000000000..b80285827a50 --- /dev/null +++ b/docs/docs/reference/contextual-defaults/context-bounds.md @@ -0,0 +1,30 @@ +--- +layout: doc-page +title: "Context Bounds" +--- + +## Context Bounds + +A context bound is a shorthand for expressing the common pattern of an implicit parameter that depends on a type parameter. Using a context bound, the `maximum` function of the last section can be written like this: +```scala +def maximum[T: Ord](xs: List[T]): T = xs.reduceLeft(max) +``` +A bound like `: Ord` on a type parameter `T` of a method or class indicates an implicit parameter `(given Ord[T])`. The implicit parameter(s) generated from context bounds come last in the definition of the containing method or class. E.g., +```scala +def f[T: C1 : C2, U: C3](x: T)(given y: U, z: V): R +``` +would expand to +```scala +def f[T, U](x: T)(given y: U, z: V)(given C1[T], C2[T], C3[U]): R +``` +Context bounds can be combined with subtype bounds. If both are present, subtype bounds come first, e.g. +```scala +def g[T <: B : C](x: T): R = ... 
+```
+
+## Syntax
+
+```
+TypeParamBounds   ::=  [SubtypeBounds] {ContextBound}
+ContextBound      ::=  ‘:’ Type
+```
diff --git a/docs/docs/reference/contextual-defaults/conversions.md b/docs/docs/reference/contextual-defaults/conversions.md
new file mode 100644
index 000000000000..2c8a49cbb3e2
--- /dev/null
+++ b/docs/docs/reference/contextual-defaults/conversions.md
@@ -0,0 +1,75 @@
+---
+layout: doc-page
+title: "Implicit Conversions"
+---
+
+Implicit conversions are defined by default instances of the `scala.Conversion` class.
+This class is defined in package `scala` as follows:
+```scala
+abstract class Conversion[-T, +U] extends (T => U)
+```
+For example, here is an implicit conversion from `String` to `Token`:
+```scala
+default for Conversion[String, Token] {
+  def apply(str: String): Token = new KeyWord(str)
+}
+```
+Using an alias, this can be expressed more concisely as:
+```scala
+default for Conversion[String, Token] = new KeyWord(_)
+```
+An implicit conversion is applied automatically by the compiler in three situations:
+
+1. If an expression `e` has type `T`, and `T` does not conform to the expression's expected type `S`.
+2. In a selection `e.m` with `e` of type `T`, but `T` defines no member `m`.
+3. In an application `e.m(args)` with `e` of type `T`, if `T` does define
+   some member(s) named `m`, but none of these members can be applied to the arguments `args`.
+
+In the first case, the compiler looks for a `scala.Conversion` default that maps
+an argument of type `T` to type `S`. In the second and third
+cases, it looks for a `scala.Conversion` default that maps an argument of type `T`
+to a type that defines a member `m` which can be applied to `args` if present.
+If such a default `C` is found, the expression `e` is replaced by `C.apply(e)`.
+
+## Examples
+
+1. The `Predef` package contains "auto-boxing" conversions that map
+primitive number types to subclasses of `java.lang.Number`.
For instance, the +conversion from `Int` to `java.lang.Integer` can be defined as follows: +```scala +default int2Integer for Conversion[Int, java.lang.Integer] = + java.lang.Integer.valueOf(_) +``` + +2. The "magnet" pattern is sometimes used to express many variants of a method. Instead of defining overloaded versions of the method, one can also let the method take one or more arguments of specially defined "magnet" types, into which various argument types can be converted. E.g. +```scala +object Completions { + + // The argument "magnet" type + enum CompletionArg { + case Error(s: String) + case Response(f: Future[HttpResponse]) + case Status(code: Future[StatusCode]) + } + object CompletionArg { + + // conversions defining the possible arguments to pass to `complete` + // these always come with CompletionArg + // They can be invoked explicitly, e.g. + // + // CompletionArg.fromStatusCode(statusCode) + + default fromString for Conversion[String, CompletionArg] = Error(_) + default fromFuture for Conversion[Future[HttpResponse], CompletionArg] = Response(_) + default fromStatusCode for Conversion[Future[StatusCode], CompletionArg] = Status(_) + } + import CompletionArg._ + + def complete[T](arg: CompletionArg) = arg match { + case Error(s) => ... + case Response(f) => ... + case Status(code) => ... + } +} +``` +This setup is more complicated than simple overloading of `complete`, but it can still be useful if normal overloading is not available (as in the case above, since we cannot have two overloaded methods that take `Future[...]` arguments), or if normal overloading would lead to a combinatorial explosion of variants. 
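+
+To make the second situation above concrete, here is a small sketch (using the proposal's `default` syntax; `Rational` and its `num`/`den` fields are made-up names) in which a member selection on `Int` triggers a conversion:
+
+```scala
+case class Rational(num: Int, den: Int)
+
+// Selecting `den` on an `Int` finds no such member, so the
+// compiler looks for a Conversion default and inserts it.
+default int2Rational for Conversion[Int, Rational] = Rational(_, 1)
+
+val d = 3.den   // expands to int2Rational.apply(3).den
+```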
diff --git a/docs/docs/reference/contextual-defaults/default-imports.md b/docs/docs/reference/contextual-defaults/default-imports.md
new file mode 100644
index 000000000000..5af879f21bc0
--- /dev/null
+++ b/docs/docs/reference/contextual-defaults/default-imports.md
@@ -0,0 +1,118 @@
+---
+layout: doc-page
+title: "Default Imports"
+---
+
+A special form of import wildcard selector is used to import defaults. Example:
+```scala
+object A {
+  class TC
+  default tc for TC
+  def f(given TC) = ???
+}
+object B {
+  import A._
+  import A.{default _}
+}
+```
+In the code above, the `import A._` clause of object `B` will import all members
+of `A` _except_ the default `tc`. Conversely, the second import `import A.{default _}`
+will import _only_ that default. The two import clauses can also be merged into one:
+```scala
+object B
+  import A.{default _, _}
+```
+
+Generally, a normal wildcard selector `_` brings all definitions other than defaults or extensions into scope,
+whereas a `default _` selector brings all defaults (including those resulting from extensions) into scope.
+
+There are two main benefits arising from these rules:
+
+ - It is made clearer where defaults in scope are coming from.
+   In particular, it is not possible to hide imported defaults in a long list of regular wildcard imports.
+ - It enables importing all defaults
+   without importing anything else. This is particularly important since defaults
+   can be anonymous, so the usual recourse of using named imports is not
+   practical.
+
+### Importing By Type
+
+Since defaults can be anonymous, it is not always practical to import them by their name, and wildcard imports are typically used instead. By-type imports provide a more specific alternative to wildcard imports, which makes it clearer what is imported. Example:
+
+```scala
+import A.{default TC}
+```
+This imports any default in `A` that has a type which conforms to `TC`.
+Importing defaults of several types `T1,...,Tn`
+is expressed by multiple `default` selectors.
+```
+import A.{default T1, ..., default Tn}
+```
+Importing all defaults of a parameterized type is expressed by wildcard arguments.
+For instance, assuming the object
+```scala
+object Instances {
+  default intOrd for Ordering[Int]
+  default listOrd[T: Ordering] for Ordering[List[T]]
+  default ec for ExecutionContext = ...
+  default im for Monoid[Int]
+}
+```
+the import
+```scala
+import Instances.{default Ordering[?], default ExecutionContext}
+```
+would import the `intOrd`, `listOrd`, and `ec` instances but leave out the `im` instance, since it fits none of the specified bounds.
+
+By-type imports can be mixed with by-name imports. If both are present in an import clause, by-type imports come last. For instance, the import clause
+```scala
+import Instances.{im, default Ordering[?]}
+```
+would import `im`, `intOrd`, and `listOrd` but leave out `ec`.
+
+
+
+### Migration
+
+The rules for imports stated above have the consequence that a library
+would have to migrate in lockstep with all its users from old-style implicits and
+normal imports to defaults and default imports.
+
+The following modifications avoid this hurdle to migration.
+
+ 1. A `default` import selector also brings old-style implicits into scope. So, in Scala 3.0
+    an old-style implicit definition can be brought into scope either by a `_` or a `default _` wildcard selector.
+
+ 2. In Scala 3.1, old-style implicits accessed through a `_` wildcard import will give a deprecation warning.
+
+ 3. In some version after 3.1, old-style implicits accessed through a `_` wildcard import will give a compiler error.
+
+These rules mean that library users can use `default _` selectors to access old-style implicits in Scala 3.0,
+and will be gently nudged and then forced to do so in later versions. Libraries can then switch to
+defaults once their user base has migrated.
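+
+As a sketch of this migration story (`OldLib` and `ctx` are hypothetical names, and `...` stands for some concrete value), a Scala 3.0 client of a not-yet-migrated library can write:
+
+```scala
+// Library still using Scala 2 style implicits:
+object OldLib {
+  implicit val ctx: Context = ...
+}
+
+// Scala 3.0 client: either selector brings `ctx` into scope.
+import OldLib.{default _}   // keeps working in later versions
+import OldLib._             // works in 3.0, deprecated from 3.1 on
+```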
+
+### Syntax
+
+```
+Import            ::=  ‘import’ ImportExpr {‘,’ ImportExpr}
+ImportExpr        ::=  StableId ‘.’ ImportSpec
+ImportSpec        ::=  id
+                    |  ‘_’
+                    |  ‘{’ ImportSelectors ‘}’
+ImportSelectors   ::=  id [‘=>’ id | ‘=>’ ‘_’] [‘,’ ImportSelectors]
+                    |  WildCardSelector {‘,’ WildCardSelector}
+WildCardSelector  ::=  ‘_’
+                    |  ‘default’ (‘_’ | InfixType)
+Export            ::=  ‘export’ ImportExpr {‘,’ ImportExpr}
+```
\ No newline at end of file
diff --git a/docs/docs/reference/contextual-defaults/default-params.md b/docs/docs/reference/contextual-defaults/default-params.md
new file mode 100644
index 000000000000..f92ac3c2a779
--- /dev/null
+++ b/docs/docs/reference/contextual-defaults/default-params.md
@@ -0,0 +1,118 @@
+---
+layout: doc-page
+title: "Implicit Parameters"
+---
+
+Functional programming tends to express most dependencies as simple function parameterization.
+This is clean and powerful, but it sometimes leads to functions that take many parameters and
+call trees where the same value is passed over and over again in long call chains to many
+functions. Implicit parameters can help here since they enable the compiler to synthesize
+repetitive arguments instead of the programmer having to write them explicitly.
+
+For example, with the [default instances](./defaults.md) defined previously,
+a maximum function that works for any arguments for which an ordering exists can be defined as follows:
+```scala
+def max[T](x: T, y: T)(given ord: Ord[T]): T =
+  if (ord.compare(x, y) < 0) y else x
+```
+Here, `ord` is an _implicit parameter_ introduced with a `given` clause.
+The `max` method can be applied as follows:
+```scala
+max(2, 3)(given intOrd)
+```
+The `(given intOrd)` part passes `intOrd` as an argument for the `ord` parameter. But the point of
+implicit parameters is that this argument can also be left out (and it usually is).
So the following
+applications are equally valid:
+```scala
+max(2, 3)
+max(List(1, 2, 3), Nil)
+```
+
+## Anonymous Given Clauses
+
+In many situations, the name of an implicit parameter need not be
+mentioned explicitly at all, since it is used only in synthesized arguments for
+other implicit parameters. In that case one can avoid defining a parameter name
+and just provide its type. Example:
+```scala
+def maximum[T](xs: List[T])(given Ord[T]): T =
+  xs.reduceLeft(max)
+```
+`maximum` takes an implicit parameter of type `Ord` only to pass it on as an
+inferred argument to `max`. The name of the parameter is left out.
+
+Generally, implicit parameters may be defined either as a full parameter list `(given p_1: T_1, ..., p_n: T_n)` or just as a sequence of types `(given T_1, ..., T_n)`. Vararg implicit parameters are not supported.
+
+## Inferring Complex Arguments
+
+Here are two other methods that have an implicit parameter of type `Ord[T]`:
+```scala
+def descending[T](given asc: Ord[T]): Ord[T] = new Ord[T] {
+  def compare(x: T, y: T) = asc.compare(y, x)
+}
+
+def minimum[T](xs: List[T])(given Ord[T]) =
+  maximum(xs)(given descending)
+```
+The `minimum` method's right hand side passes `descending` as an explicit argument to `maximum(xs)`.
+With this setup, the following calls are all well-formed, and they all normalize to the last one:
+```scala
+minimum(xs)
+maximum(xs)(given descending)
+maximum(xs)(given descending(given listOrd))
+maximum(xs)(given descending(given listOrd(given intOrd)))
+```
+
+## Multiple Given Clauses
+
+There can be several implicit parameter clauses in a definition and implicit parameter clauses can be freely
+mixed with normal ones. Example:
+```scala
+def f(u: Universe)(given ctx: u.Context)(given s: ctx.Symbol, k: ctx.Kind) = ...
+```
+Multiple given clauses are matched left-to-right in applications. Example:
+```scala
+object global extends Universe { type Context = ...
}
+default ctx for global.Context { type Symbol = ...; type Kind = ... }
+default sym for ctx.Symbol
+default kind for ctx.Kind
+```
+Then the following calls are all valid (and normalize to the last one)
+```scala
+f
+f(global)
+f(global)(given ctx)
+f(global)(given ctx)(given sym, kind)
+```
+But `f(global)(given sym, kind)` would give a type error.
+
+## Summoning Instances
+
+The method `summon` in `Predef` returns the default of a specific type. For example,
+the default for `Ord[List[Int]]` is produced by
+```scala
+summon[Ord[List[Int]]]  // reduces to listOrd(given intOrd)
+```
+The `summon` method is simply defined as the (non-widening) identity function over an implicit parameter.
+```scala
+def summon[T](given x: T): x.type = x
+```
+
+## Syntax
+
+Here is the new syntax of parameters and arguments seen as a delta from the [standard context free syntax of Scala 3](../../internals/syntax.md).
+```
+ClsParamClauses      ::=  ...
+                       |  {ClsParamClause} {GivenClsParamClause}
+GivenClsParamClause  ::=  ‘(’ ‘given’ (ClsParams | GivenTypes) ‘)’
+DefParamClauses      ::=  ...
+                       |  {DefParamClause} {GivenParamClause}
+GivenParamClause     ::=  ‘(’ ‘given’ (DefParams | GivenTypes) ‘)’
+GivenTypes           ::=  AnnotType {‘,’ AnnotType}
+
+ParArgumentExprs     ::=  ...
+                       |  ‘(’ ‘given’ ExprsInParens ‘)’
+```
diff --git a/docs/docs/reference/contextual-defaults/defaults.md b/docs/docs/reference/contextual-defaults/defaults.md
new file mode 100644
index 000000000000..cbfb58f29253
--- /dev/null
+++ b/docs/docs/reference/contextual-defaults/defaults.md
@@ -0,0 +1,88 @@
+---
+layout: doc-page
+title: "Defaults"
+---
+
+Defaults define "canonical" values of certain types
+that serve for synthesizing arguments to [implicit parameters](./default-params.md).
Example:
+
+```scala
+trait Ord[T] {
+  def compare(x: T, y: T): Int
+  def (x: T) < (y: T) = compare(x, y) < 0
+  def (x: T) > (y: T) = compare(x, y) > 0
+}
+
+default intOrd for Ord[Int] {
+  def compare(x: Int, y: Int) =
+    if (x < y) -1 else if (x > y) +1 else 0
+}
+
+default listOrd[T](given ord: Ord[T]) for Ord[List[T]] {
+
+  def compare(xs: List[T], ys: List[T]): Int = (xs, ys) match
+    case (Nil, Nil) => 0
+    case (Nil, _) => -1
+    case (_, Nil) => +1
+    case (x :: xs1, y :: ys1) =>
+      val fst = ord.compare(x, y)
+      if (fst != 0) fst else compare(xs1, ys1)
+}
+```
+This code defines a trait `Ord` and two defaults. `intOrd` defines
+a default for `Ord[Int]` whereas `listOrd[T]` defines defaults
+for `Ord[List[T]]` for all types `T` that come with a default for `Ord[T]`
+themselves. The `(given ord: Ord[T])` clause in `listOrd` defines a condition: there must be a
+default of type `Ord[T]` so that a default of type `Ord[List[T]]` can
+be synthesized. Such conditions are expanded by the compiler to implicit
+parameters, which are explained in the [next section](./default-params.md).
+
+## Anonymous Defaults
+
+The name of a default can be left out. So the definitions
+of the last section can also be expressed like this:
+```scala
+default for Ord[Int] { ... }
+default [T](given Ord[T]) for Ord[List[T]] { ... }
+```
+If the name of a default is missing, the compiler will synthesize a name from
+the implemented type(s).
+
+## Alias Defaults
+
+An alias can be used to define a default that is equal to some expression. E.g.:
+```scala
+default global for ExecutionContext = new ForkJoinPool()
+```
+This creates a default `global` of type `ExecutionContext` that resolves to the right
+hand side `new ForkJoinPool()`.
+The first time `global` is accessed, a new `ForkJoinPool` is created, which is then
+returned for this and all subsequent accesses to `global`.
+
+Alias defaults can be anonymous, e.g.
+```scala +default for Position = enclosingTree.position +default (given outer: Context) for Context = outer.withOwner(currentOwner) +``` +An alias default can have type parameters and implicit parameters just like any other default, +but it can only implement a single type. + +## Default Initialization + +A default without type or implicit parameters is initialized on-demand, the first +time it is accessed. If a default has type or implicit parameters, a fresh instance +is created for each reference. + +## Syntax + +Here is the new syntax for defaults, seen as a delta from the [standard context free syntax of Scala 3](../../internals/syntax.md). +``` +TmplDef ::= ... + | ‘default’ DefaultDef +DefaultDef ::= DefaultSig ‘for’ [‘_’ ‘<:’] Type ‘=’ Expr + | DefaultSig ‘for’ [ConstrApp {‘,’ ConstrApp }] [TemplateBody] +DefaultSig ::= [id] [DefTypeParamClause] {GivenParamClause} +GivenParamClause ::= ‘(’ ‘given’ (DefParams | GivenTypes) ‘)’ +GivenTypes ::= Type {‘,’ Type} +``` +The identifier `id` can be omitted only if some types are implemented or the template body defines at least one extension method. diff --git a/docs/docs/reference/contextual-defaults/derivation.md b/docs/docs/reference/contextual-defaults/derivation.md new file mode 100644 index 000000000000..78cae750d556 --- /dev/null +++ b/docs/docs/reference/contextual-defaults/derivation.md @@ -0,0 +1,399 @@ +--- +layout: doc-page +title: Type Class Derivation +--- + +Type class derivation is a way to automatically generate default instances for type classes which satisfy some simple +conditions. A type class in this sense is any trait or class with a type parameter determining the type being operated +on. Common examples are `Eq`, `Ordering`, or `Show`. 
For example, given the following `Tree` algebraic data type
(ADT),
+
+```scala
+enum Tree[T] derives Eq, Ordering, Show {
+  case Branch[T](left: Tree[T], right: Tree[T])
+  case Leaf[T](elem: T)
+}
+```
+
+The `derives` clause generates the following default instances for the `Eq`, `Ordering` and `Show` type classes in the
+companion object of `Tree`,
+
+```scala
+default [T: Eq] for Eq[Tree[T]] = Eq.derived
+default [T: Ordering] for Ordering[Tree[T]] = Ordering.derived
+default [T: Show] for Show[Tree[T]] = Show.derived
+```
+
+We say that `Tree` is the _deriving type_ and that the `Eq`, `Ordering` and `Show` instances are _derived instances_.
+
+### Types supporting `derives` clauses
+
+All data types can have a `derives` clause. This document focuses primarily on data types which also have a default
+of the `Mirror` type class available. Defaults of the `Mirror` type class are generated automatically by the compiler
+for,
+
++ enums and enum cases
++ case classes and case objects
++ sealed classes or traits that have only case classes and case objects as children
+
+`Mirror` type class instances provide information at the type level about the components and labelling of the type.
+They also provide minimal term level infrastructure to allow higher level libraries to provide comprehensive
+derivation support.
+
+```scala
+sealed trait Mirror {
+
+  /** the type being mirrored */
+  type MirroredType
+
+  /** the type of the elements of the mirrored type */
+  type MirroredElemTypes
+
+  /** The mirrored *-type */
+  type MirroredMonoType
+
+  /** The name of the type */
+  type MirroredLabel <: String
+
+  /** The names of the elements of the type */
+  type MirroredElemLabels <: Tuple
+}
+
+object Mirror {
+  /** The Mirror for a product type */
+  trait Product extends Mirror {
+
+    /** Create a new instance of type `T` with elements taken from product `p`.
*/
+    def fromProduct(p: scala.Product): MirroredMonoType
+  }
+
+  trait Sum extends Mirror { self =>
+    /** The ordinal number of the case class of `x`. For enums, `ordinal(x) == x.ordinal` */
+    def ordinal(x: MirroredMonoType): Int
+  }
+}
+```
+
+Product types (i.e. case classes and objects, and enum cases) have mirrors which are subtypes of `Mirror.Product`. Sum
+types (i.e. sealed classes or traits with product children, and enums) have mirrors which are subtypes of `Mirror.Sum`.
+
+For the `Tree` ADT from above, the following `Mirror` instances will be automatically provided by the compiler,
+
+```scala
+// Mirror for Tree
+Mirror.Sum {
+  type MirroredType = Tree
+  type MirroredElemTypes[T] = (Branch[T], Leaf[T])
+  type MirroredMonoType = Tree[_]
+  type MirroredLabel = "Tree"
+  type MirroredElemLabels = ("Branch", "Leaf")
+
+  def ordinal(x: MirroredMonoType): Int = x match {
+    case _: Branch[_] => 0
+    case _: Leaf[_] => 1
+  }
+}
+
+// Mirror for Branch
+Mirror.Product {
+  type MirroredType = Branch
+  type MirroredElemTypes[T] = (Tree[T], Tree[T])
+  type MirroredMonoType = Branch[_]
+  type MirroredLabel = "Branch"
+  type MirroredElemLabels = ("left", "right")
+
+  def fromProduct(p: Product): MirroredMonoType =
+    new Branch(...)
+}
+
+// Mirror for Leaf
+Mirror.Product {
+  type MirroredType = Leaf
+  type MirroredElemTypes[T] = Tuple1[T]
+  type MirroredMonoType = Leaf[_]
+  type MirroredLabel = "Leaf"
+  type MirroredElemLabels = Tuple1["elem"]
+
+  def fromProduct(p: Product): MirroredMonoType =
+    new Leaf(...)
+}
+```
+
+Note the following properties of `Mirror` types,
+
++ Properties are encoded using types rather than terms. This means that they have no runtime footprint unless used and
+  also that they are a compile time feature for use with Dotty's metaprogramming facilities.
++ The kinds of `MirroredType` and `MirroredElemTypes` match the kind of the data type the mirror is an instance for.
+  This allows `Mirrors` to support ADTs of all kinds.
++ There is no distinct representation type for sums or products (i.e. there is no `HList` or `Coproduct` type as in
+  Scala 2 versions of shapeless). Instead the collection of child types of a data type is represented by an ordinary,
+  possibly parameterized, tuple type. Dotty's metaprogramming facilities can be used to work with these tuple types
+  as-is, and higher level libraries can be built on top of them.
++ The methods `ordinal` and `fromProduct` are defined in terms of `MirroredMonoType` which is the type of kind-`*`
+  which is obtained from `MirroredType` by wildcarding its type parameters.
+
+### Type classes supporting automatic deriving
+
+A trait or class can appear in a `derives` clause if its companion object defines a method named `derived`. The
+signature and implementation of a `derived` method for a type class `TC[_]` are arbitrary but it is typically of the
+following form,
+
+```scala
+def derived[T](given Mirror.Of[T]): TC[T] = ...
+```
+
+That is, the `derived` method takes an implicit parameter of (some subtype of) type `Mirror` which defines the shape of
+the deriving type `T`, and computes the type class implementation according to that shape. This is all that the
+provider of an ADT with a `derives` clause has to know about the derivation of a type class instance.
+
+Note that `derived` methods may have given `Mirror` arguments indirectly (e.g. by having a given argument which in turn
+has a given `Mirror`), or not at all (e.g. they might use some completely different user-provided mechanism, for
+instance using Dotty macros or runtime reflection). We expect that (direct or indirect) `Mirror` based implementations
+will be the most common and that is what this document emphasises.
+
+Type class authors will most likely use higher level derivation or generic programming libraries to implement
+`derived` methods.
An example of how a `derived` method might be implemented using _only_ the low level facilities
+described above and Dotty's general metaprogramming features is provided below. It is not anticipated that type class
+authors would normally implement a `derived` method in this way; however, this walkthrough can be taken as a guide for
+authors of the higher level derivation libraries that we expect typical type class authors will use (for a fully
+worked out example of such a library, see [shapeless 3](https://github.com/milessabin/shapeless/tree/shapeless-3)).
+
+#### How to write a type class `derived` method using low level mechanisms
+
+The low-level method we will use to implement a type class `derived` method in this example exploits three new
+type-level constructs in Dotty: inline methods, inline matches, and implicit searches via `summonFrom`. Given this definition of the
+`Eq` type class,
+
+
+```scala
+trait Eq[T] {
+  def eqv(x: T, y: T): Boolean
+}
+```
+
+we need to implement a method `Eq.derived` on the companion object of `Eq` that produces a default for `Eq[T]` given
+a `Mirror[T]`. Here is a possible implementation,
+
+```scala
+inline default derived[T](given m: Mirror.Of[T]) for Eq[T] = {
+  val elemInstances = summonAll[m.MirroredElemTypes] // (1)
+  inline m match {                                   // (2)
+    case s: Mirror.SumOf[T] => eqSum(s, elemInstances)
+    case p: Mirror.ProductOf[T] => eqProduct(p, elemInstances)
+  }
+}
+```
+
+Note that `derived` is defined as an `inline` default. This means that the `derived` method will be expanded at
+call sites (for instance the compiler generated instance definitions in the companion objects of ADTs which have a
+`derives Eq` clause), and also that it can be used recursively if necessary, to compute instances for children.
+
+The body of this method (1) first materializes the `Eq` instances for all the child types of the type the instance is
+being derived for. This is either all the branches of a sum type or all the fields of a product type.
The
+implementation of `summonAll` is `inline` and uses Dotty's `summonFrom` construct (via the auxiliary `summon` method) to collect the instances as a
+`List`,
+
+```scala
+inline def summon[T]: T = summonFrom {
+  case t: T => t
+}
+
+inline def summonAll[T <: Tuple]: List[Eq[_]] = inline erasedValue[T] match {
+  case _: Unit => Nil
+  case _: (t *: ts) => summon[Eq[t]] :: summonAll[ts]
+}
+```
+
+with the instances for children in hand, the `derived` method uses an `inline match` to dispatch to methods which can
+construct instances for either sums or products (2). Note that because `derived` is `inline` the match will be
+resolved at compile-time and only the right-hand side of the matching case will be inlined into the generated code with
+types refined as revealed by the match.
+
+In the sum case, `eqSum`, we use the runtime `ordinal` values of the arguments to `eqv` to first check if the two
+values are of the same subtype of the ADT (3) and then, if they are, to further test for equality based on the `Eq`
+instance for the appropriate ADT subtype using the auxiliary method `check` (4).
+
+```scala
+def eqSum[T](s: Mirror.SumOf[T], elems: List[Eq[_]]): Eq[T] =
+  new Eq[T] {
+    def eqv(x: T, y: T): Boolean = {
+      val ordx = s.ordinal(x)                            // (3)
+      (s.ordinal(y) == ordx) && check(elems(ordx))(x, y) // (4)
+    }
+  }
+```
+
+In the product case, `eqProduct`, we test the runtime values of the arguments to `eqv` for equality as products based
+on the `Eq` instances for the fields of the data type (5),
+
+```scala
+def eqProduct[T](p: Mirror.ProductOf[T], elems: List[Eq[_]]): Eq[T] =
+  new Eq[T] {
+    def eqv(x: T, y: T): Boolean =
+      iterator(x).zip(iterator(y)).zip(elems.iterator).forall {  // (5)
+        case ((x, y), elem) => check(elem)(x, y)
+      }
+  }
+```
+
+Pulling this all together, we have the following complete implementation,
+
+```scala
+import scala.deriving._
+import scala.compiletime.{erasedValue, summonFrom}
+
+inline def summon[T]: T = summonFrom {
+  case t: T => t
+}
+
+inline def summonAll[T <: Tuple]: List[Eq[_]] = inline erasedValue[T] match {
+  case _: Unit => Nil
+  case _: (t *: ts) => summon[Eq[t]] :: summonAll[ts]
+}
+
+trait Eq[T] {
+  def eqv(x: T, y: T): Boolean
+}
+
+object Eq {
+  default for Eq[Int] {
+    def eqv(x: Int, y: Int) = x == y
+  }
+
+  def check(elem: Eq[_])(x: Any, y: Any): Boolean =
+    elem.asInstanceOf[Eq[Any]].eqv(x, y)
+
+  def iterator[T](p: T) = p.asInstanceOf[Product].productIterator
+
+  def eqSum[T](s: Mirror.SumOf[T], elems: List[Eq[_]]): Eq[T] =
+    new Eq[T] {
+      def eqv(x: T, y: T): Boolean = {
+        val ordx = s.ordinal(x)
+        (s.ordinal(y) == ordx) && check(elems(ordx))(x, y)
+      }
+    }
+
+  def eqProduct[T](p: Mirror.ProductOf[T], elems: List[Eq[_]]): Eq[T] =
+    new Eq[T] {
+      def eqv(x: T, y: T): Boolean =
+        iterator(x).zip(iterator(y)).zip(elems.iterator).forall {
+          case ((x, y), elem) => check(elem)(x, y)
+        }
+    }
+
+  inline default derived[T](given m: Mirror.Of[T]) for Eq[T] = {
+    val elemInstances = summonAll[m.MirroredElemTypes]
+    inline m match {
+      case s: Mirror.SumOf[T] => eqSum(s, elemInstances)
+      case p: Mirror.ProductOf[T] =>
eqProduct(p, elemInstances)
+    }
+  }
+}
+```
+
+we can test this relative to a simple ADT like so,
+
+```scala
+enum Opt[+T] derives Eq {
+  case Sm(t: T)
+  case Nn
+}
+
+object Test extends App {
+  import Opt._
+  val eqoi = summon[Eq[Opt[Int]]]
+  assert(eqoi.eqv(Sm(23), Sm(23)))
+  assert(!eqoi.eqv(Sm(23), Sm(13)))
+  assert(!eqoi.eqv(Sm(23), Nn))
+}
+```
+
+In this case the code that is generated by the inline expansion for the derived `Eq` instance for `Opt` looks like the
+following, after a little polishing,
+
+```scala
+default derived$Eq[T](given eqT: Eq[T]) for Eq[Opt[T]] =
+  eqSum(summon[Mirror[Opt[T]]],
+    List(
+      eqProduct(summon[Mirror[Sm[T]]], List(summon[Eq[T]])),
+      eqProduct(summon[Mirror[Nn.type]], Nil)
+    )
+  )
+```
+
+Alternative approaches can be taken to the way that `derived` methods can be defined. For example, more aggressively
+inlined variants using Dotty macros, whilst being more involved for type class authors to write than the example
+above, can produce code for type classes like `Eq` which eliminates all the abstraction artefacts (e.g. the `List`s of
+child instances in the above) and generates code which is indistinguishable from what a programmer might write by hand.
+As a third example, using a higher level library such as shapeless the type class author could define an equivalent
+`derived` method as,
+
+```scala
+default eqSum[A](given inst: => K0.CoproductInstances[Eq, A]) for Eq[A] {
+  def eqv(x: A, y: A): Boolean = inst.fold2(x, y)(false)(
+    [t] => (eqt: Eq[t], t0: t, t1: t) => eqt.eqv(t0, t1)
+  )
+}
+
+default eqProduct[A](given inst: K0.ProductInstances[Eq, A]) for Eq[A] {
+  def eqv(x: A, y: A): Boolean = inst.foldLeft2(x, y)(true: Boolean)(
+    [t] => (acc: Boolean, eqt: Eq[t], t0: t, t1: t) => Complete(!eqt.eqv(t0, t1))(false)(true)
+  )
+}
+
+
+inline def derived[A](given gen: K0.Generic[A]): Eq[A] = gen.derive(eqSum, eqProduct)
+```
+
+The framework described here enables all three of these approaches without mandating any of them.
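+
+Whichever implementation strategy a type class author chooses, client code is unaffected. A minimal sketch of the usage side (`Point` is a made-up ADT; the `Eq` type class and its `derived` method are as defined above):
+
+```scala
+case class Point(x: Int, y: Int) derives Eq
+
+// Field-by-field comparison via the derived instance
+val p = Point(1, 2)
+summon[Eq[Point]].eqv(p, Point(1, 2))   // true
+summon[Eq[Point]].eqv(p, Point(3, 4))   // false
+```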
+
+### Deriving instances elsewhere
+
+Sometimes one would like to derive a type class instance for an ADT after the ADT is defined, without being able to
+change the code of the ADT itself. To do this, simply define an instance using the `derived` method of the type class
+as right-hand side. E.g., to implement `Ordering` for `Option` define,
+
+```scala
+default [T: Ordering] for Ordering[Option[T]] = Ordering.derived
+```
+
+Assuming the `Ordering.derived` method has a given parameter of type `Mirror[T]` it will be satisfied by the
+compiler-generated `Mirror` instance for `Option` and the derivation of the instance will be expanded on the right
+hand side of this definition in the same way as an instance defined in ADT companion objects.
+
+### Syntax
+
+```
+Template          ::=  InheritClauses [TemplateBody]
+EnumDef           ::=  id ClassConstr InheritClauses EnumBody
+InheritClauses    ::=  [‘extends’ ConstrApps] [‘derives’ QualId {‘,’ QualId}]
+ConstrApps        ::=  ConstrApp {‘with’ ConstrApp}
+                    |  ConstrApp {‘,’ ConstrApp}
+```
+
+### Discussion
+
+This type class derivation framework is intentionally very small and low-level. There are essentially two pieces of
+infrastructure in compiler-generated `Mirror` instances,
+
++ type members encoding properties of the mirrored types.
++ a minimal value level mechanism for working generically with terms of the mirrored types.
+
+The `Mirror` infrastructure can be seen as an extension of the existing `Product` infrastructure for case classes:
+typically `Mirror` types will be implemented by the ADT's companion object, hence the type members and the `ordinal` or
+`fromProduct` methods will be members of that object. The primary motivation for this design decision, and the
+decision to encode properties via types rather than terms was to keep the bytecode and runtime footprint of the
+feature small enough to make it possible to provide `Mirror` instances _unconditionally_.

Whilst `Mirror`s encode properties precisely via type members, the value-level `ordinal` and `fromProduct` are
somewhat weakly typed (because they are defined in terms of `MirroredMonoType`), just like the members of `Product`.
This means that code for generic type classes has to ensure that type exploration and value selection proceed in
lockstep, and it has to assert this conformance in some places using casts. If generic type classes are correctly
written these casts will never fail.

As mentioned, however, the compiler-provided mechanism is intentionally very low-level and it is anticipated that
higher-level type class derivation and generic programming libraries will build on this and Dotty's other
metaprogramming facilities to hide these low-level details from type class authors and general users. Type class
derivation in the style of both shapeless and Magnolia is possible (a prototype of shapeless 3, which combines
aspects of both shapeless 2 and Magnolia, has been developed alongside this language feature), as is a more
aggressively inlined style, supported by Dotty's new quote/splice macro and inlining facilities.
diff --git a/docs/docs/reference/contextual-defaults/extension-methods.md b/docs/docs/reference/contextual-defaults/extension-methods.md
new file mode 100644
index 000000000000..f73925828df4
--- /dev/null
+++ b/docs/docs/reference/contextual-defaults/extension-methods.md
@@ -0,0 +1,183 @@
---
layout: doc-page
title: "Extension Methods"
---

Extension methods allow one to add methods to a type after the type is defined.
Example: + +```scala +case class Circle(x: Double, y: Double, radius: Double) + +def (c: Circle).circumference: Double = c.radius * math.Pi * 2 +``` + +Like regular methods, extension methods can be invoked with infix `.`: + +```scala +val circle = Circle(0, 0, 1) +circle.circumference +``` + +### Translation of Extension Methods + +Extension methods are methods that have a parameter clause in front of the defined +identifier. They translate to methods where the leading parameter section is moved +to after the defined identifier. So, the definition of `circumference` above translates +to the plain method, and can also be invoked as such: +```scala +def circumference(c: Circle): Double = c.radius * math.Pi * 2 + +assert(circle.circumference == circumference(circle)) +``` + +### Translation of Calls to Extension Methods + +When is an extension method applicable? There are two possibilities. + + - An extension method is applicable if it is visible under a simple name, by being defined + or inherited or imported in a scope enclosing the application. + - An extension method is applicable if it is a member of some default instance that is eligible at the point of the application. + +As an example, consider an extension method `longestStrings` on `Seq[String]` defined in a trait `StringSeqOps`. + +```scala +trait StringSeqOps { + def (xs: Seq[String]).longestStrings = { + val maxLength = xs.map(_.length).max + xs.filter(_.length == maxLength) + } +} +``` +We can make the extension method available by defining a default for `StringSeqOps`, like this: +```scala +default ops1 for StringSeqOps +``` +Then +```scala +List("here", "is", "a", "list").longestStrings +``` +is legal everywhere `ops1` is available. Alternatively, we can define `longestStrings` as a member of a normal object. But then the method has to be brought into scope to be usable as an extension method. 

```scala
object ops2 extends StringSeqOps
import ops2.longestStrings
List("here", "is", "a", "list").longestStrings
```
The precise rules for resolving a selection to an extension method are as follows.

Assume a selection `e.m[Ts]` where `m` is not a member of `e`, where the type arguments `[Ts]` are optional,
and where `T` is the expected type. The following two rewritings are tried in order:

 1. The selection is rewritten to `m[Ts](e)`.
 2. If the first rewriting does not typecheck with expected type `T`, and there is a default `d`
    in either the current scope or in the implicit scope of `T` such that `d` defines an extension
    method named `m`, then the selection is expanded to `d.m[Ts](e)`.
    This second rewriting is attempted at the time when the compiler also tries an implicit conversion
    from `T` to a type containing `m`. If there is more than one way of rewriting, an ambiguity error results.

So `circle.circumference` translates to `CircleOps.circumference(circle)`, provided
`circle` has type `Circle` and `CircleOps` is eligible (i.e. it is visible at the point of call or it is defined in the companion object of `Circle`).

### Operators

The extension method syntax also applies to the definition of operators.
In this case it is allowed and preferable to omit the period between the leading parameter list
and the operator. In each case the definition syntax mirrors the way the operator is applied.
Examples:
```scala
def (x: String) < (y: String) = ...
def (x: Elem) +: (xs: Seq[Elem]) = ...
def (x: Number) min (y: Number) = ...

"ab" < "c"
1 +: List(2, 3)
x min 3
```
The three definitions above translate to
```scala
def < (x: String)(y: String) = ...
def +: (xs: Seq[Elem])(x: Elem) = ...
def min(x: Number)(y: Number) = ...
```
Note the swap of the two parameters `x` and `xs` when translating
the right-binding operator `+:` to an extension method.
This is analogous +to the implementation of right binding operators as normal methods. + +### Generic Extensions + +The `StringSeqOps` examples extended a specific instance of a generic type. It is also possible to extend a generic type by adding type parameters to an extension method. Examples: + +```scala +def [T](xs: List[T]) second = + xs.tail.head + +def [T](xs: List[List[T]]) flattened = + xs.foldLeft[List[T]](Nil)(_ ++ _) + +def [T: Numeric](x: T) + (y: T): T = + summon[Numeric[T]].plus(x, y) +``` + +If an extension method has type parameters, they come immediately after the `def` and are followed by the extended parameter. When calling a generic extension method, any explicitly given type arguments follow the method name. So the `second` method can be instantiated as follows: +```scala +List(1, 2, 3).second[Int] +``` +### Collective Extensions + +A collective extension defines one or more concrete methods that have the same type parameters +and prefix parameter. Examples: + +```scala +extension stringOps for (xs: Seq[String]) { + def longestStrings: Seq[String] = { + val maxLength = xs.map(_.length).max + xs.filter(_.length == maxLength) + } +} + +extension listOps for [T](xs: List[T]) { + def second = xs.tail.head + def third: T = xs.tail.tail.head +} + +extension for [T](xs: List[T])(given Ordering[T]) { + def largest(n: Int) = xs.sorted.takeRight(n) +} +``` +If an extension is anonymous (as in the last clause), its name is synthesized from the name of the first defined extension method. 

The extensions above are equivalent to the following default instances where the implemented parent is `AnyRef` and the leading parameters are repeated in each extension method definition:
```scala
default stringOps for AnyRef {
  def (xs: Seq[String]).longestStrings: Seq[String] = {
    val maxLength = xs.map(_.length).max
    xs.filter(_.length == maxLength)
  }
}
default listOps for AnyRef {
  def [T](xs: List[T]) second = xs.tail.head
  def [T](xs: List[T]) third: T = xs.tail.tail.head
}
default extension_largest_List_T for AnyRef {
  def [T](xs: List[T]) largest (given Ordering[T])(n: Int) =
    xs.sorted.takeRight(n)
}
```

`extension` is a soft keyword. It can also be used as a regular identifier.

### Syntax

Here are the syntax changes for extension methods and collective extensions relative
to the [current syntax](../../internals/syntax.md). `extension` is a soft keyword, recognized only
in tandem with `for`. It can be used as an identifier everywhere else.
```
DefSig ::= ...
         | ExtParamClause [nl] [‘.’] id DefParamClauses
ExtParamClause ::= [DefTypeParamClause] ‘(’ DefParam ‘)’
TmplDef ::= ...
         | ‘extension’ ExtensionDef
ExtensionDef ::= [id] ‘for’ ExtParamClause {GivenParamClause} [nl] ExtMethods
ExtMethods ::= ‘{’ ‘def’ DefDef {semi ‘def’ DefDef} ‘}’
```

diff --git a/docs/docs/reference/contextual-defaults/given-clauses.md b/docs/docs/reference/contextual-defaults/given-clauses.md
new file mode 100644
index 000000000000..69f41382daf4
--- /dev/null
+++ b/docs/docs/reference/contextual-defaults/given-clauses.md
@@ -0,0 +1,115 @@
---
layout: doc-page
title: "Implicit Parameters"
---

Functional programming tends to express most dependencies as simple function parameterization.
This is clean and powerful, but it sometimes leads to functions that take many parameters and
call trees where the same value is passed over and over again in long call chains to many
functions.
Implicit parameters can help here since they enable the compiler to synthesize +repetitive arguments instead of the programmer having to write them explicitly. + +For example, with the [default instances](./defaults.md) defined previously, +a maximum function that works for any arguments for which an ordering exists can be defined as follows: +```scala +def max[T](x: T, y: T)(given ord: Ord[T]): T = + if (ord.compare(x, y) < 0) y else x +``` +Here, `ord` is an _implicit parameter_ introduced with a `given` clause. +The `max` method can be applied as follows: +```scala +max(2, 3)(given intOrd) +``` +The `(given intOrd)` part passes `intOrd` as an argument for the `ord` parameter. But the point of +implicit parameters is that this argument can also be left out (and it usually is). So the following +applications are equally valid: +```scala +max(2, 3) +max(List(1, 2, 3), Nil) +``` + +## Anonymous Given Clauses + +In many situations, the name of an implicit parameter need not be +mentioned explicitly at all, since it is used only in synthesized arguments for +other implicit parameters. In that case one can avoid defining a parameter name +and just provide its type. Example: +```scala +def maximum[T](xs: List[T])(given Ord[T]): T = + xs.reduceLeft(max) +``` +`maximum` takes an implicit parameter of type `Ord` only to pass it on as an +inferred argument to `max`. The name of the parameter is left out. + +Generally, implicit parameters may be defined either as a full parameter list `(given p_1: T_1, ..., p_n: T_n)` or just as a sequence of types `(given T_1, ..., T_n)`. Vararg implicit parameters are not supported. 
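For comparison, the same mechanism exists in current Scala under the `implicit` keyword. The sketch below substitutes the standard library's `Ordering` for the hypothetical `Ord`:

```scala
// `ord` is an implicit parameter: callers may pass the argument
// explicitly or let the compiler synthesize it.
def max[T](x: T, y: T)(implicit ord: Ordering[T]): T =
  if (ord.compare(x, y) < 0) y else x

// Analog of the anonymous clause: the parameter is only forwarded to max.
def maximum[T](xs: List[T])(implicit ord: Ordering[T]): T =
  xs.reduceLeft((a, b) => max(a, b))

val a = max(2, 3)                // argument inferred by the compiler
val b = max(2, 3)(Ordering.Int)  // argument passed explicitly
val c = maximum(List(1, 5, 2))
```

The explicit call `max(2, 3)(Ordering.Int)` plays the role of `max(2, 3)(given intOrd)` in the proposed syntax.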
+ +## Inferring Complex Arguments + +Here are two other methods that have an implicit parameter of type `Ord[T]`: +```scala +def descending[T](given asc: Ord[T]): Ord[T] = new Ord[T] { + def compare(x: T, y: T) = asc.compare(y, x) +} + +def minimum[T](xs: List[T])(given Ord[T]) = + maximum(xs)(given descending) +``` +The `minimum` method's right hand side passes `descending` as an explicit argument to `maximum(xs)`. +With this setup, the following calls are all well-formed, and they all normalize to the last one: +```scala +minimum(xs) +maximum(xs)(given descending) +maximum(xs)(given descending(given listOrd)) +maximum(xs)(given descending(given listOrd(given intOrd))) +``` + +## Multiple Given Clauses + +There can be several implicit parameter clauses in a definition and implicit parameter clauses can be freely +mixed with normal ones. Example: +```scala +def f(u: Universe)(given ctx: u.Context)(given s: ctx.Symbol, k: ctx.Kind) = ... +``` +Multiple given clauses are matched left-to-right in applications. Example: +```scala +object global extends Universe { type Context = ... } +default ctx for global.Context { type Symbol = ...; type Kind = ... } +default sym for ctx.Symbol +default kind for ctx.Kind +``` +Then the following calls are all valid (and normalize to the last one) +```scala +f +f(global) +f(global)(given ctx) +f(global)(given ctx)(given sym, kind) +``` +But `f(global)(given sym, kind)` would give a type error. + +## Summoning Instances + +The method `summon` in `Predef` returns the default of a specific type. For example, +the default for `Ord[List[Int]]` is produced by +```scala +summon[Ord[List[Int]]] // reduces to listOrd(given intOrd) +``` +The `summon` method is simply defined as the (non-widening) identity function over an implicit parameter. 
+```scala +def summon[T](given x: T): x.type = x +``` + +## Syntax + +Here is the new syntax of parameters and arguments seen as a delta from the [standard context free syntax of Scala 3](../../internals/syntax.md). +``` +ClsParamClauses ::= ... + | {ClsParamClause} {GivenClsParamClause} +GivenClsParamClause ::= ‘(’ ‘given’ (ClsParams | GivenTypes) ‘)’ +DefParamClauses ::= ... + | {DefParamClause} {GivenParamClause} +GivenParamClause ::= ‘(’ ‘given’ (DefParams | GivenTypes) ‘)’ +GivenTypes ::= AnnotType {‘,’ AnnotType} + +ParArgumentExprs ::= ... + | ‘(’ ‘given’ ExprsInParens ‘)’ +``` diff --git a/docs/docs/reference/contextual-defaults/implicit-by-name-parameters.md b/docs/docs/reference/contextual-defaults/implicit-by-name-parameters.md new file mode 100644 index 000000000000..5ad7f0aba8e2 --- /dev/null +++ b/docs/docs/reference/contextual-defaults/implicit-by-name-parameters.md @@ -0,0 +1,65 @@ +--- +layout: doc-page +title: "Implicit By-Name Parameters" +--- + +Implicit parameters can be declared by-name to avoid a divergent inferred expansion. Example: + +```scala +trait Codec[T] { + def write(x: T): Unit +} + +default optionCodec[T](given ev: => Codec[T]) for Codec[Option[T]] { + def write(xo: Option[T]) = xo match { + case Some(x) => ev.write(x) + case None => + } +} + +val s = summon[Codec[Option[Int]]] + +s.write(Some(33)) +s.write(None) +``` +As is the case for a normal by-name parameter, the argument for the implicit parameter `ev` +is evaluated on demand. In the example above, if the option value `x` is `None`, it is +not evaluated at all. + +The synthesized argument for an implicit parameter is backed by a local val +if this is necessary to prevent an otherwise diverging expansion. + +The precise steps for synthesizing an argument for an implicit by-name parameter of type `=> T` are as follows. + + 1. Create a new default for `T`: + + ```scala + default lv for T = ??? + ``` + where `lv` is an arbitrary fresh name. + + 1. 
This default is not immediately available as a candidate for argument inference (making it immediately available could result in a loop in the synthesized computation). But it becomes available in all nested contexts that look again for an argument to an implicit by-name parameter.

 1. If this search succeeds with expression `E`, and `E` contains references to `lv`, replace `E` by

    ```scala
    { default lv for T = E; lv }
    ```

    Otherwise, return `E` unchanged.

In the example above, the definition of `s` would be expanded as follows.

```scala
val s = summon[Test.Codec[Option[Int]]](
  optionCodec[Int](intCodec)
)
```

No local default was generated because the synthesized argument is not recursive.

### Reference

For more info, see [Issue #1998](https://github.com/lampepfl/dotty/issues/1998)
and the associated [Scala SIP](https://docs.scala-lang.org/sips/byname-implicits.html).
diff --git a/docs/docs/reference/contextual-defaults/implicit-function-types-spec.md b/docs/docs/reference/contextual-defaults/implicit-function-types-spec.md
new file mode 100644
index 000000000000..cda87bd33e54
--- /dev/null
+++ b/docs/docs/reference/contextual-defaults/implicit-function-types-spec.md
@@ -0,0 +1,77 @@
---
layout: doc-page
title: "Implicit Function Types - More Details"
---

## Syntax

    Type ::= ...
           | FunArgTypes ‘=>’ Type
    FunArgTypes ::= InfixType
                  | ‘(’ [ ‘[given]’ FunArgType {‘,’ FunArgType } ] ‘)’
                  | ‘(’ ‘[given]’ TypedFunParam {‘,’ TypedFunParam } ‘)’
    Bindings ::= ‘(’ [[‘given’] Binding {‘,’ Binding}] ‘)’

Implicit function types associate to the right, e.g.
`(given S) => (given T) => U` is the same as `(given S) => ((given T) => U)`.

## Implementation

Implicit function types are shorthands for class types that define `apply`
methods with implicit parameters. Specifically, the `N`-ary implicit function type
`(given T1, ..., TN) => R` is a shorthand for the class type
`ImplicitFunctionN[T1, ..., TN, R]`.
Such class types are assumed to have the following definitions, for any value of `N >= 1`:
```scala
package scala
trait ImplicitFunctionN[-T1, ..., -TN, +R] {
  def apply(given x1: T1, ..., xN: TN): R
}
```
Implicit function types erase to normal function types, so these classes are
generated on the fly for typechecking, but not realized in actual code.

Implicit function literals `(given x1: T1, ..., xn: Tn) => e` map
implicit parameters `xi` of types `Ti` to the result of evaluating the expression `e`.
The scope of each implicit parameter `xi` is `e`. The parameters must have pairwise distinct names.

If the expected type of the implicit function literal is of the form
`scala.ImplicitFunctionN[S1, ..., Sn, R]`, the expected type of `e` is `R` and
the type `Ti` of any of the parameters `xi` can be omitted, in which case `Ti = Si`
is assumed. If the expected type of the implicit function literal is
some other type, all implicit parameter types must be explicitly given, and the expected type of `e` is undefined.
The type of the implicit function literal is `scala.ImplicitFunctionN[S1, ..., Sn, T]`, where `T` is the widened
type of `e`. `T` must be equivalent to a type which does not refer to any of
the implicit parameters `xi`.

The implicit function literal is evaluated as the instance creation
expression
```scala
new scala.ImplicitFunctionN[T1, ..., Tn, T] {
  def apply(given x1: T1, ..., xn: Tn): T = e
}
```
An implicit parameter may also be a wildcard represented by an underscore `_`. In
that case, a fresh name for the parameter is chosen arbitrarily.

Note: The closing paragraph of the
[Anonymous Functions section](https://www.scala-lang.org/files/archive/spec/2.12/06-expressions.html#anonymous-functions)
of Scala 2.12 is subsumed by implicit function types and should be removed.
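The encoding can be imitated by hand in current Scala. The following sketch hand-rolls a single-arity version of the assumed trait; the names (`ImplicitFunction1`, `strLen`, `greeting`) are illustrative, not compiler-generated:

```scala
// A hand-rolled single-arity analog of the assumed ImplicitFunctionN trait,
// written with Scala 2-style implicit parameters.
trait ImplicitFunction1[-T1, +R] {
  def apply(implicit x1: T1): R
}

// An "implicit function literal", spelled out as an anonymous class.
val strLen: ImplicitFunction1[String, Int] = new ImplicitFunction1[String, Int] {
  def apply(implicit s: String): Int = s.length
}

implicit val greeting: String = "hello"

val n = strLen.apply   // the String argument is synthesized from scope
val m = strLen("hi!")  // or passed explicitly
```

This mirrors the two application modes described in the companion page: synthesized arguments by default, explicit arguments on demand.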

Implicit function literals `(given x1: T1, ..., xn: Tn) => e` are
automatically created for any expression `e` whose expected type is
`scala.ImplicitFunctionN[T1, ..., Tn, R]`, unless `e` is
itself an implicit function literal. This is analogous to the automatic
insertion of `scala.Function0` around expressions in by-name argument position.

Implicit function types generalize to `N > 22` in the same way that function types do; see [the corresponding
documentation](../dropped-features/limit22.md).

## Examples

See the section on Expressiveness from [Simplicitly: foundations and
applications of implicit function
types](https://dl.acm.org/citation.cfm?id=3158130).

### Type Checking

After desugaring no additional typing rules are required for implicit function types.
diff --git a/docs/docs/reference/contextual-defaults/implicit-function-types.md b/docs/docs/reference/contextual-defaults/implicit-function-types.md
new file mode 100644
index 000000000000..de954862f59b
--- /dev/null
+++ b/docs/docs/reference/contextual-defaults/implicit-function-types.md
@@ -0,0 +1,151 @@
---
layout: doc-page
title: "Implicit Function Types"
---

_Implicit functions_ are functions with (only) implicit parameters.
Their types are _implicit function types_. Here is an example of an implicit function type:

```scala
type Executable[T] = (given ExecutionContext) => T
```
An implicit function is applied to synthesized arguments, in
the same way a method with a given clause is applied. For instance:
```scala
  default ec for ExecutionContext = ...

  def f(x: Int): Executable[Int] = ...
+ + f(2)(given ec) // explicit argument + f(2) // argument is inferred +``` +Conversely, if the expected type of an expression `E` is an implicit function type +`(given T_1, ..., T_n) => U` and `E` is not already an +implicit function literal, `E` is converted to an implicit function literal by rewriting to +```scala + (given x_1: T1, ..., x_n: Tn) => E +``` +where the names `x_1`, ..., `x_n` are arbitrary. This expansion is performed +before the expression `E` is typechecked, which means that `x_1`, ..., `x_n` +are available as defaults in `E`. + +Like their types, implicit function literals are written with a `given` parameter clause. +They differ from normal function literals in that their types are implicit function types. + +For example, continuing with the previous definitions, +```scala + def g(arg: Executable[Int]) = ... + + g(22) // is expanded to g((given ev) => 22) + + g(f(2)) // is expanded to g((given ev) => f(2)(given ev)) + + g((given ctx) => f(22)(given ctx)) // is left as it is +``` +### Example: Builder Pattern + +Implicit function types have considerable expressive power. For +instance, here is how they can support the "builder pattern", where +the aim is to construct tables like this: +```scala + table { + row { + cell("top left") + cell("top right") + } + row { + cell("bottom left") + cell("bottom right") + } + } +``` +The idea is to define classes for `Table` and `Row` that allow +addition of elements via `add`: +```scala + class Table { + val rows = new ArrayBuffer[Row] + def add(r: Row): Unit = rows += r + override def toString = rows.mkString("Table(", ", ", ")") + } + + class Row { + val cells = new ArrayBuffer[Cell] + def add(c: Cell): Unit = cells += c + override def toString = cells.mkString("Row(", ", ", ")") + } + + case class Cell(elem: String) +``` +Then, the `table`, `row` and `cell` constructor methods can be defined +with implicit function types as parameters to avoid the plumbing boilerplate +that would otherwise be necessary. 
+```scala + def table(init: (given Table) => Unit) = { + default t for Table + init + t + } + + def row(init: (given Row) => Unit)(given t: Table) = { + default r for Row + init + t.add(r) + } + + def cell(str: String)(given r: Row) = + r.add(new Cell(str)) +``` +With that setup, the table construction code above compiles and expands to: +```scala + table { (given $t: Table) => + row { (given $r: Row) => + cell("top left")(given $r) + cell("top right")(given $r) + } (given $t) + row { (given $r: Row) => + cell("bottom left")(given $r) + cell("bottom right")(given $r) + } (given $t) + } +``` +### Example: Postconditions + +As a larger example, here is a way to define constructs for checking arbitrary postconditions using an extension method `ensuring` so that the checked result can be referred to simply by `result`. The example combines opaque aliases, implicit function types, and extension methods to provide a zero-overhead abstraction. + +```scala +object PostConditions { + opaque type WrappedResult[T] = T + + def result[T](given r: WrappedResult[T]): T = r + + def [T](x: T).ensuring(condition: (given WrappedResult[T]) => Boolean): T = { + assert(condition(given x)) + x + } +} +import PostConditions.{ensuring, result} + +val s = List(1, 2, 3).sum.ensuring(result == 6) +``` +**Explanations**: We use an implicit function type `(given WrappedResult[T]) => Boolean` +as the type of the condition of `ensuring`. An argument to `ensuring` such as +`(result == 6)` will therefore have a default for `WrappedResult[T]` in +scope to pass along to the `result` method. `WrappedResult` is a fresh type, to make sure +that we do not get unwanted defaults in scope (this is good practice in all cases +where implicit parameters are involved). Since `WrappedResult` is an opaque type alias, its +values need not be boxed, and since `ensuring` is added as an extension method, its argument +does not need boxing either. 
Hence, the implementation of `ensuring` is about as efficient
as the best possible code one could write by hand:

```scala
{ val result = List(1, 2, 3).sum
  assert(result == 6)
  result
}
```
### Reference

For more info, see the [blog article](https://www.scala-lang.org/blog/2016/12/07/implicit-function-types.html)
(which uses a different syntax that has been superseded).

[More details](./implicit-function-types-spec.md)
diff --git a/docs/docs/reference/contextual-defaults/motivation.md b/docs/docs/reference/contextual-defaults/motivation.md
new file mode 100644
index 000000000000..13e3b4bdaab5
--- /dev/null
+++ b/docs/docs/reference/contextual-defaults/motivation.md
@@ -0,0 +1,80 @@
---
layout: doc-page
title: "Overview"
---

### Critique of the Status Quo

Scala's implicits are its most distinguished feature. They are _the_ fundamental way to abstract over context. They represent a unified paradigm with a great variety of use cases, among them: implementing type classes, establishing context, dependency injection, expressing capabilities, computing new types and proving relationships between them.

Following Haskell, Scala was the second popular language to have some form of implicits. Other languages have followed suit, e.g. Rust's traits or Swift's protocol extensions. Design proposals are also on the table for Kotlin as [compile time dependency resolution](https://github.com/Kotlin/KEEP/blob/e863b25f8b3f2e9b9aaac361c6ee52be31453ee0/proposals/compile-time-dependency-resolution.md), for C# as [Shapes and Extensions](https://github.com/dotnet/csharplang/issues/164)
or for F# as [Traits](https://github.com/MattWindsor91/visualfsharp/blob/hackathon-vs/examples/fsconcepts.md). Implicits are also a common feature of theorem provers such as Coq or Agda.

Even though these designs use widely different terminology, they are all variants of the core idea of _term inference_.
Given a type, the compiler synthesizes a "canonical" term that has that type. Scala embodies the idea in a purer form than most other languages: An implicit parameter directly leads to an inferred argument term that could also be written down explicitly. By contrast, typeclass based designs are less direct since they hide term inference behind some form of type classification and do not offer the option of writing the inferred quantities (typically, dictionaries) explicitly. + +Given that term inference is where the industry is heading, and given that Scala has it in a very pure form, how come implicits are not more popular? In fact, it's fair to say that implicits are at the same time Scala's most distinguished and most controversial feature. I believe this is due to a number of aspects that together make implicits harder to learn than necessary and also make it harder to prevent abuses. + +Particular criticisms are: + +1. Being very powerful, implicits are easily over-used and mis-used. This observation holds in almost all cases when we talk about _implicit conversions_, which, even though conceptually different, share the same syntax with other implicit definitions. For instance, regarding the two definitions + + ```scala + implicit def i1(implicit x: T): C[T] = ... + implicit def i2(x: T): C[T] = ... + ``` + + the first of these is a conditional implicit _value_, the second an implicit _conversion_. Conditional implicit values are a cornerstone for expressing type classes, whereas most applications of implicit conversions have turned out to be of dubious value. The problem is that many newcomers to the language start with defining implicit conversions since they are easy to understand and seem powerful and convenient. Scala 3 will put under a language flag both definitions and applications of "undisciplined" implicit conversions between types defined elsewhere. This is a useful step to push back against overuse of implicit conversions. 
But the problem remains that syntactically, conversions and values just look too similar for comfort. + + 2. Another widespread abuse is over-reliance on implicit imports. This often leads to inscrutable type errors that go away with the right import incantation, leaving a feeling of frustration. Conversely, it is hard to see what implicits a program uses since implicits can hide anywhere in a long list of imports. + + 3. The syntax of implicit definitions is too minimal. It consists of a single modifier, `implicit`, that can be attached to a large number of language constructs. A problem with this for newcomers is that it conveys mechanism instead of intent. For instance, a typeclass instance is an implicit object or val if unconditional and an implicit def with implicit parameters referring to some class if conditional. This describes precisely what the implicit definitions translate to -- just drop the `implicit` modifier, and that's it! But the cues that define intent are rather indirect and can be easily misread, as demonstrated by the definitions of `i1` and `i2` above. + + 4. The syntax of implicit parameters also has shortcomings. While implicit _parameters_ are designated specifically, arguments are not. Passing an argument to an implicit parameter looks like a regular application `f(arg)`. This is problematic because it means there can be confusion regarding what parameter gets instantiated in a call. For instance, in + ```scala + def currentMap(implicit ctx: Context): Map[String, Int] + ``` + one cannot write `currentMap("abc")` since the string "abc" is taken as explicit argument to the implicit `ctx` parameter. One has to write `currentMap.apply("abc")` instead, which is awkward and irregular. For the same reason, a method definition can only have one implicit parameter section and it must always come last. 
This restriction not only reduces orthogonality, but also prevents some useful program constructs, such as a method with a regular parameter whose type depends on an implicit value. Finally, it's also a bit annoying that implicit parameters must have a name, even though in many cases that name is never referenced.

 5. Implicits pose challenges for tooling. The set of available implicits depends on context, so command completion has to take context into account. This is feasible in an IDE, but docs like ScalaDoc that are based on static web pages can only provide an approximation. Another problem is that failed implicit searches often give very unspecific error messages, in particular if some deeply recursive implicit search has failed. Note that the Dotty compiler already implements some improvements in this case, but challenges still remain.

None of the shortcomings is fatal; after all, implicits are very widely used, and many libraries and applications rely on them. But together, they make code using implicits a lot more cumbersome and less clear than it could be.

Historically, many of these shortcomings come from the way implicits were gradually "discovered" in Scala. Scala originally had only implicit conversions, with the intended use case of "extending" a class or trait after it was defined, i.e. what is expressed by implicit classes in later versions of Scala. Implicit parameters and instance definitions came later in 2006, and we picked similar syntax since it seemed convenient. For the same reason, no effort was made to distinguish implicit imports or arguments from normal ones.

Existing Scala programmers by and large have gotten used to the status quo and see little need for change. But for newcomers this status quo presents a big hurdle. I believe if we want to overcome that hurdle, we should take a step back and allow ourselves to consider a radically new design.

### The New Design

The following pages introduce a redesign of contextual abstractions in Scala. They introduce four fundamental changes:

 1. [Defaults](./defaults.md) are a new way to define basic terms that can be synthesized. They replace implicit definitions. The core principle of the proposal is that, rather than mixing the `implicit` modifier with a large number of features, we have a single way to define terms that can be synthesized for types.

 2. [Given Clauses](./given-clauses.md) are a new syntax for implicit _parameters_ and their _arguments_. Both are introduced with the same keyword, `given`. This unambiguously aligns parameters and arguments, removing a number of language warts. It also allows us to have several implicit parameter sections, and to have implicit parameters followed by normal ones.

 3. [Default Imports](./default-imports.md) are a new class of imports that specifically import defaults and nothing else. Defaults _must be_ imported with default imports; a plain import will no longer bring them into scope.

 4. [Implicit Conversions](./conversions.md) are now expressed as defaults of a standard `Conversion` class. All other forms of implicit conversions will be phased out.

This section also contains pages describing other language features that are related to context abstraction. These are:

 - [Context Bounds](./context-bounds.md), which carry over unchanged.
 - [Extension Methods](./extension-methods.md) replace implicit classes in a way that integrates better with typeclasses.
 - [Implementing Typeclasses](./typeclasses.md) demonstrates how some common typeclasses can be implemented using the new constructs.
 - [Typeclass Derivation](./derivation.md) introduces constructs to automatically derive typeclass instances for ADTs.
 - [Multiversal Equality](./multiversal-equality.md) introduces a special typeclass to support type safe equality.
+ - [Implicit Function Types](./implicit-function-types.md) provide a way to abstract over given clauses.
+ - [Implicit By-Name Parameters](./implicit-by-name-parameters.md) are an essential tool to define recursive synthesized values without looping.
+ - [Relationship with Scala 2 Implicits](./relationship-implicits.md) discusses the relationship between old-style implicits and defaults and how to migrate from one to the other.
+
+Overall, the new design achieves a better separation of term inference from the rest of the language: There is a single way to define default instances instead of a multitude of forms all taking an `implicit` modifier. There is a single way to introduce implicit parameters and arguments instead of conflating implicit and normal arguments. There is a separate way to import defaults that does not allow them to hide in a sea of normal imports. And there is a single way to define an implicit conversion which is clearly marked as such and does not require special syntax.
+
+This design thus avoids feature interactions and makes the language more consistent and orthogonal. It will make implicits easier to learn and harder to abuse. It will greatly improve the clarity of the 95% of Scala programs that use implicits. It thus has the potential to fulfil the promise of term inference in a principled way that is also accessible and friendly.
+
+Could we achieve the same goals by tweaking existing implicits? After having tried for a long time, I now believe that this is impossible.
+
+ - First, some of the problems are clearly syntactic and require different syntax to solve them.
+ - Second, there is the problem of how to migrate. We cannot change the rules in mid-flight. At some stage of language evolution we need to accommodate both the new and the old rules. 
With a syntax change, this is easy: Introduce the new syntax with new rules, support the old syntax for a while to facilitate cross compilation, deprecate and phase out the old syntax at some later time. Keeping the same syntax does not offer this path, and in fact does not seem to offer any viable path for evolution.
+ - Third, even if we somehow succeeded with migration, we would still have the problem
+ of how to teach this. We cannot make existing tutorials go away. Almost all existing tutorials start with implicit conversions, which will go away; they use normal imports, which will go away; and they explain calls to methods with implicit parameters by expanding them to plain applications, which will also go away. This means that we'd have
+ to add modifications and qualifications to all existing literature and courseware, likely causing more confusion for beginners instead of less. By contrast, with a new syntax there is a clear criterion: Any book or courseware that mentions `implicit` is outdated and should be updated.
+
diff --git a/docs/docs/reference/contextual-defaults/multiversal-equality.md b/docs/docs/reference/contextual-defaults/multiversal-equality.md
new file mode 100644
index 000000000000..9f88680b0cc7
--- /dev/null
+++ b/docs/docs/reference/contextual-defaults/multiversal-equality.md
@@ -0,0 +1,214 @@
+---
+layout: doc-page
+title: "Multiversal Equality"
+---
+
+Previously, Scala had universal equality: Two values of any types
+could be compared with each other with `==` and `!=`. This came from
+the fact that `==` and `!=` are implemented in terms of Java's
+`equals` method, which can also compare values of any two reference
+types.
+
+Universal equality is convenient. But it is also dangerous since it
+undermines type safety. For instance, let's assume one is left after some refactoring
+with an erroneous program where a value `y` has type `S` instead of the correct type `T`.
+
+```scala
+val x = ... // of type T
+val y = ... 
// of type S, but should be T +x == y // typechecks, will always yield false +``` + +If `y` gets compared to other values of type `T`, +the program will still typecheck, since values of all types can be compared with each other. +But it will probably give unexpected results and fail at runtime. + +Multiversal equality is an opt-in way to make universal equality +safer. It uses a binary typeclass `Eql` to indicate that values of +two given types can be compared with each other. +The example above would not typecheck if `S` or `T` was a class +that derives `Eql`, e.g. +```scala +class T derives Eql +``` +Alternatively, one can also provide an `Eql` default instance directly, like this: +```scala +default for Eql[T, T] = Eql.derived +``` +This definition effectively says that values of type `T` can (only) be +compared to other values of type `T` when using `==` or `!=`. The definition +affects type checking but it has no significance for runtime +behavior, since `==` always maps to `equals` and `!=` always maps to +the negation of `equals`. The right hand side `Eql.derived` of the definition +is a value that has any `Eql` instance as its type. Here is the definition of class +`Eql` and its companion object: +```scala +package scala +import annotation.implicitNotFound + +@implicitNotFound("Values of types ${L} and ${R} cannot be compared with == or !=") +sealed trait Eql[-L, -R] + +object Eql { + object derived extends Eql[Any, Any] +} +``` + +One can have several `Eql` defaults that mention a type. For example, the four +definitions below make values of type `A` and type `B` comparable with +each other, but not comparable to anything else: + +```scala +default for Eql[A, A] = Eql.derived +default for Eql[B, B] = Eql.derived +default for Eql[A, B] = Eql.derived +default for Eql[B, A] = Eql.derived +``` +The `scala.Eql` object defines a number of `Eql` instances that together +define a rule book for what standard types can be compared (more details below). 
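As an aside for readers trying this out: the `Eql` machinery described here eventually shipped in Scala 3 under the name `CanEqual`. Assuming a current Scala 3 compiler, the opt-in checking can be exercised like this (the `Pos` class is illustrative):

```scala
// Opt in to strict (multiversal) equality checking.
import scala.language.strictEquality

// Deriving CanEqual (the shipped name for Eql) makes Pos comparable to Pos.
case class Pos(line: Int) derives CanEqual

val a = Pos(1)
val b = Pos(1)
val same = a == b        // ok: a CanEqual[Pos, Pos] instance is derived
// Pos(1) == "oops"      // would not compile under strictEquality
```

Without the `derives CanEqual` clause, even `a == b` would be rejected under `strictEquality`, which is exactly the behavior the rule book above describes.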
+
+There's also a "fallback" instance named `eqlAny` that allows comparisons
+over all types that do not themselves have an `Eql` default. `eqlAny` is defined as follows:
+
+```scala
+def eqlAny[L, R]: Eql[L, R] = Eql.derived
+```
+
+Even though `eqlAny` is not declared a default, the compiler will still construct an `eqlAny`
+instance as the answer to an implicit search for the
+type `Eql[L, R]`, unless `L` or `R` have `Eql` defaults
+defined on them, or the language feature `strictEquality` is enabled.
+
+The primary motivation for having `eqlAny` is backwards compatibility. If this is of no concern, one can disable `eqlAny` by enabling the language
+feature `strictEquality`. As for all language features, this can either be
+done with an import
+
+```scala
+import scala.language.strictEquality
+```
+or with a command line option `-language:strictEquality`.
+
+## Deriving Eql Instances
+
+Instead of defining `Eql` instances directly, it is often more convenient to derive them. Example:
+```scala
+class Box[T](x: T) derives Eql
+```
+By the usual rules of [typeclass derivation](./derivation.md),
+this generates the following `Eql` instance in the companion object of `Box`:
+```scala
+default [T, U](given Eql[T, U]) for Eql[Box[T], Box[U]] = Eql.derived
+```
+That is, two boxes are comparable with `==` or `!=` if their elements are. Examples:
+```scala
+new Box(1) == new Box(1L) // ok since there is an instance for `Eql[Int, Long]`
+new Box(1) == new Box("a") // error: can't compare
+new Box(1) == 1 // error: can't compare
+```
+
+## Precise Rules for Equality Checking
+
+The precise rules for equality checking are as follows.
+
+If the `strictEquality` feature is enabled then
+a comparison using `x == y` or `x != y` between values `x: T` and `y: U`
+is legal if there is a default for `Eql[T, U]`.
+
+In the default case where the `strictEquality` feature is not enabled the comparison is
+also legal if
+
+ 1. `T` and `U` are the same, or
+ 2. 
one of `T`, `U` is a subtype of the _lifted_ version of the other type, or
+ 3. neither `T` nor `U` has a _reflexive_ `Eql` instance.
+
+Explanations:
+
+ - _lifting_ a type `S` means replacing all references to abstract types
+ in covariant positions of `S` by their upper bound, and replacing
+ all refinement types in covariant positions of `S` by their parent.
+ - a type `T` has a _reflexive_ `Eql` instance if `summon[Eql[T, T]]`
+ succeeds.
+
+## Predefined Eql Instances
+
+The `Eql` object defines instances for comparing
+ - the primitive types `Byte`, `Short`, `Char`, `Int`, `Long`, `Float`, `Double`, `Boolean`, and `Unit`,
+ - `java.lang.Number`, `java.lang.Boolean`, and `java.lang.Character`,
+ - `scala.collection.Seq`, and `scala.collection.Set`.
+
+Instances are defined so that every one of these types has a _reflexive_ `Eql` instance, and the following holds:
+
+ - Primitive numeric types can be compared with each other.
+ - Primitive numeric types can be compared with subtypes of `java.lang.Number` (and _vice versa_).
+ - `Boolean` can be compared with `java.lang.Boolean` (and _vice versa_).
+ - `Char` can be compared with `java.lang.Character` (and _vice versa_).
+ - Two sequences (of arbitrary subtypes of `scala.collection.Seq`) can be compared
+ with each other if their element types can be compared. The two sequence types
+ need not be the same.
+ - Two sets (of arbitrary subtypes of `scala.collection.Set`) can be compared
+ with each other if their element types can be compared. The two set types
+ need not be the same.
+ - Any subtype of `AnyRef` can be compared with `Null` (and _vice versa_).
+
+## Why Two Type Parameters?
+
+One particular feature of the `Eql` type is that it takes _two_ type parameters, representing the types of the two items to be compared. By contrast, conventional
+implementations of an equality type class take only a single type parameter which represents the common type of _both_ operands. 
One type parameter is simpler than two, so why go through the additional complication? The reason has to do with the fact that, rather than coming up with a type class where no operation existed before, +we are dealing with a refinement of pre-existing, universal equality. It's best illustrated through an example. + +Say you want to come up with a safe version of the `contains` method on `List[T]`. The original definition of `contains` in the standard library was: +```scala +class List[+T] { + ... + def contains(x: Any): Boolean +} +``` +That uses universal equality in an unsafe way since it permits arguments of any type to be compared with the list's elements. The "obvious" alternative definition +```scala + def contains(x: T): Boolean +``` +does not work, since it refers to the covariant parameter `T` in a nonvariant context. The only variance-correct way to use the type parameter `T` in `contains` is as a lower bound: +```scala + def contains[U >: T](x: U): Boolean +``` +This generic version of `contains` is the one used in the current (Scala 2.13) version of `List`. +It looks different but it admits exactly the same applications as the `contains(x: Any)` definition we started with. +However, we can make it more useful (i.e. restrictive) by adding an `Eql` parameter: +```scala + def contains[U >: T](x: U)(given Eql[T, U]): Boolean // (1) +``` +This version of `contains` is equality-safe! More precisely, given +`x: T`, `xs: List[T]` and `y: U`, then `xs.contains(y)` is type-correct if and only if +`x == y` is type-correct. + +Unfortunately, the crucial ability to "lift" equality type checking from simple equality and pattern matching to arbitrary user-defined operations gets lost if we restrict ourselves to an equality class with a single type parameter. 
Consider the following signature of `contains` with a hypothetical `Eql1[T]` type class: +```scala + def contains[U >: T](x: U)(given Eql1[U]): Boolean // (2) +``` +This version could be applied just as widely as the original `contains(x: Any)` method, +since the `Eql1[Any]` fallback is always available! So we have gained nothing. What got lost in the transition to a single parameter type class was the original rule that `Eql[A, B]` is available only if neither `A` nor `B` have a reflexive `Eql` instance. That rule simply cannot be expressed if there is a single type parameter for `Eql`. + +The situation is different under `-language:strictEquality`. In that case, +the `Eql[Any, Any]` or `Eql1[Any]` instances would never be available, and the +single and two-parameter versions would indeed coincide for most practical purposes. + +But assuming `-language:strictEquality` immediately and everywhere poses migration problems which might well be unsurmountable. Consider again `contains`, which is in the standard library. Parameterizing it with the `Eql` type class as in (1) is an immediate win since it rules out non-sensical applications while still allowing all sensible ones. +So it can be done almost at any time, modulo binary compatibility concerns. +On the other hand, parameterizing `contains` with `Eql1` as in (2) would make `contains` +unusable for all types that have not yet declared an `Eql1` instance, including all +types coming from Java. This is clearly unacceptable. It would lead to a situation where, +rather than migrating existing libraries to use safe equality, the only upgrade path is to have parallel libraries, with the new version only catering to types deriving `Eql1` and the old version dealing with everything else. Such a split of the ecosystem would be very problematic, which means the cure is likely to be worse than the disease. 
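To see concretely how the two-parameter class restricts applications, here is a small self-contained sketch; `Eql2` and `containsSafe` are hypothetical stand-ins for `Eql` and the library's `contains`, with a single hand-written instance instead of derivation:

```scala
// Hypothetical two-parameter equality class, standing in for scala.Eql
trait Eql2[-L, -R]
object Eql2 {
  // the only comparison we allow: Int with Int
  implicit val intInt: Eql2[Int, Int] = new Eql2[Int, Int] {}
}

// contains parameterized with equality evidence, as in version (1) above
def containsSafe[T, U >: T](xs: List[T], x: U)(implicit ev: Eql2[T, U]): Boolean =
  xs.contains(x)

val found = containsSafe(List(1, 2, 3), 2)   // ok: Eql2[Int, Int] is available
// containsSafe(List(1, 2, 3), "a")          // does not compile: no Eql2[Int, Any]
```

The contravariant parameters do the restricting: comparing a `List[Int]` with a `String` would infer `U = Any` and require an `Eql2[Int, Any]`, which the sole `Eql2[Int, Int]` instance does not provide.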
+
+For these reasons, it looks like a two-parameter type class is the only way forward because it can take the existing ecosystem where it is and migrate it towards a future where more and more code uses safe equality.
+
+In applications where `-language:strictEquality` is the default, one could also introduce a one-parameter type alias such as
+```scala
+type Eq[-T] = Eql[T, T]
+```
+Operations needing safe equality could then use this alias instead of the two-parameter `Eql` class. But it would only
+work under `-language:strictEquality`, since otherwise the universal `Eq[Any]` instance would be available everywhere.
+
+
+More on multiversal equality is found in a [blog post](http://www.scala-lang.org/blog/2016/05/06/multiversal-equality.html)
+and a [GitHub issue](https://github.com/lampepfl/dotty/issues/1247).
diff --git a/docs/docs/reference/contextual-defaults/relationship-implicits.md b/docs/docs/reference/contextual-defaults/relationship-implicits.md
new file mode 100644
index 000000000000..35cd3af266da
--- /dev/null
+++ b/docs/docs/reference/contextual-defaults/relationship-implicits.md
@@ -0,0 +1,189 @@
+---
+layout: doc-page
+title: Relationship with Scala 2 Implicits
+---
+
+Many, but not all, of the new contextual abstraction features in Scala 3 can be mapped to Scala 2's implicits. This page gives a rundown on the relationships between new and old features.
+
+## Simulating Scala 3 Contextual Abstraction Concepts with Scala 2 Implicits
+
+### Defaults
+
+Defaults can be mapped to combinations of implicit objects, classes and implicit methods.
+
+ 1. Defaults without parameters are mapped to implicit objects. E.g.,
+    ```scala
+    default intOrd for Ord[Int] { ... }
+    ```
+    maps to
+    ```scala
+    implicit object IntOrd extends Ord[Int] { ... }
+    ```
+ 2. Parameterized defaults are mapped to combinations of classes and implicit methods. E.g.,
+    ```scala
+    default listOrd[T](given ord: Ord[T]) for Ord[List[T]] { ... 
}
+    ```
+    maps to
+    ```scala
+    class ListOrd[T](implicit ord: Ord[T]) extends Ord[List[T]] { ... }
+    final implicit def ListOrd[T](implicit ord: Ord[T]): ListOrd[T] = new ListOrd[T]
+    ```
+ 3. Alias defaults map to implicit methods or implicit lazy vals. If an alias has neither type nor implicit parameters,
+    it is treated as a lazy val, unless the right hand side is a simple reference, in which case we can use a forwarder to
+    that reference without caching it.
+
+Examples:
+```scala
+default global for ExecutionContext = new ForkJoinContext()
+
+val ctx: Context
+default for Context = ctx
+```
+would map to
+```scala
+final implicit lazy val global: ExecutionContext = new ForkJoinContext()
+final implicit def default_Context: Context = ctx
+```
+
+### Anonymous Defaults
+
+Anonymous defaults get compiler synthesized names, which are generated in a reproducible way from the implemented type(s). For example, if the names of the `intOrd` and `listOrd` defaults above were left out, the following names would be synthesized instead:
+```scala
+default default_Ord_Int for Ord[Int] { ... }
+default default_Ord_List[T] for Ord[List[T]] { ... }
+```
+The synthesized names are formed from
+
+ - the prefix `default_`,
+ - the simple name(s) of the implemented type(s), leaving out any prefixes,
+ - the simple name(s) of the toplevel argument type constructors to these types.
+
+Tuples are treated as transparent, i.e. a type `F[(X, Y)]` would get the synthesized name
+`F_X_Y`. Directly implemented function types `A => B` are represented as `A_to_B`. Function types used as arguments to other type constructors are represented as `Function`. 
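Returning to the mapping in point 2 above, it can be sanity-checked with plain `implicit`-style Scala; the `Ord` operations here (`compare` with lexicographic list ordering) are illustrative assumptions, not the standard library's:

```scala
trait Ord[T] { def compare(x: T, y: T): Int }

// point 1: a parameterless default becomes an implicit object
implicit object IntOrd extends Ord[Int] {
  def compare(x: Int, y: Int): Int = Integer.compare(x, y)
}

// point 2: a parameterized default becomes a class plus an implicit method
class ListOrd[T](implicit ord: Ord[T]) extends Ord[List[T]] {
  // lexicographic comparison, element by element
  def compare(xs: List[T], ys: List[T]): Int = (xs, ys) match {
    case (Nil, Nil) => 0
    case (Nil, _)   => -1
    case (_, Nil)   => 1
    case (x :: xs1, y :: ys1) =>
      val c = ord.compare(x, y)
      if (c != 0) c else compare(xs1, ys1)
  }
}
implicit def listOrd[T](implicit ord: Ord[T]): ListOrd[T] = new ListOrd[T]

// the chained implicit search Ord[List[Int]] -> Ord[Int] succeeds
val c = implicitly[Ord[List[Int]]].compare(List(1, 2), List(1, 3))  // negative
```

The implicit method mirrors how the compiler conditionally synthesizes a `ListOrd` instance whenever an `Ord` for the element type is available.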
+
+### Anonymous Collective Extensions
+
+Anonymous collective extensions also get compiler synthesized names, which are formed from
+
+ - the prefix `extension_`
+ - the name of the first defined extension method
+ - the simple name of the first parameter type of this extension method
+ - the simple name(s) of the toplevel argument type constructors to this type.
+
+For example, the extension
+```scala
+extension for [T] (xs: List[T]) with {
+  def second = ...
+}
+```
+gets the synthesized name `extension_second_List_T`.
+
+### Given Clauses
+
+Given clauses correspond largely to Scala 2's implicit parameter clauses. E.g.
+```scala
+def max[T](x: T, y: T)(given ord: Ord[T]): T
+```
+would be written
+```scala
+def max[T](x: T, y: T)(implicit ord: Ord[T]): T
+```
+in Scala 2. The main difference concerns applications of such parameters.
+Explicit arguments to parameters of given clauses _must_ be written using `given`,
+mirroring the definition syntax. E.g., `max(2, 3)(given IntOrd)`.
+Scala 2 uses normal applications `max(2, 3)(IntOrd)` instead. The Scala 2 syntax has some inherent ambiguities and restrictions which are overcome by the new syntax. For instance, multiple implicit parameter lists are not available in the old syntax, even though they can be simulated using auxiliary objects in the "Aux" pattern.
+
+The `summon` method corresponds to `implicitly` in Scala 2.
+It is precisely the same as the `the` method in Shapeless.
+The difference between `summon` (or `the`) and `implicitly` is
+that `summon` can return a more precise type than the type that was
+asked for.
+
+### Context Bounds
+
+Context bounds are the same in both language versions. They expand to the respective forms of implicit parameters.
+
+**Note:** To ease migration, context bounds in Dotty map for a limited time to old-style implicit parameters for which arguments can be passed either with `given` or
+with a normal application. 
Once old-style implicits are deprecated, context bounds
+will map to given clauses instead.
+
+### Extension Methods
+
+Extension methods have no direct counterpart in Scala 2, but they can be simulated with implicit classes. For instance, the extension method
+```scala
+def (c: Circle).circumference: Double = c.radius * math.Pi * 2
+```
+could be simulated to some degree by
+```scala
+implicit class CircleDecorator(c: Circle) extends AnyVal {
+  def circumference: Double = c.radius * math.Pi * 2
+}
+```
+Abstract extension methods in traits that are implemented in defaults have no direct counterpart in Scala 2. The only way to simulate these is to make implicit classes available through imports. The Simulacrum macro library can automate this process in some cases.
+
+### Typeclass Derivation
+
+Typeclass derivation has no direct counterpart in the Scala 2 language. Comparable functionality can be achieved by macro-based libraries such as Shapeless, Magnolia, or scalaz-deriving.
+
+### Implicit Function Types
+
+Implicit function types have no analogue in Scala 2.
+
+### Implicit By-Name Parameters
+
+Implicit by-name parameters are not supported in Scala 2, but can be emulated to some degree by the `Lazy` type in Shapeless.
+
+## Simulating Scala 2 Implicits in Scala 3
+
+### Implicit Conversions
+
+Implicit conversion methods in Scala 2 can be expressed as default instances of the `scala.Conversion` class in Dotty. E.g. instead of
+```scala
+implicit def stringToToken(str: String): Token = new KeyWord(str)
+```
+one can write
+```scala
+default stringToToken for Conversion[String, Token] {
+  def apply(str: String): Token = new KeyWord(str)
+}
+```
+or
+```scala
+default stringToToken for Conversion[String, Token] = new KeyWord(_)
+```
+
+### Implicit Classes
+
+Implicit classes in Scala 2 are often used to define extension methods, which are directly supported in Dotty. 
Other uses of implicit classes can be simulated by a pair of a regular class and a default instance of the `Conversion` class.
+
+### Implicit Values
+
+Implicit `val` definitions in Scala 2 can be expressed in Dotty using a regular `val` definition and an alias default.
+E.g., Scala 2's
+```scala
+lazy implicit val pos: Position = tree.sourcePos
+```
+can be expressed in Dotty as
+```scala
+lazy val pos: Position = tree.sourcePos
+default for Position = pos
+```
+
+### Abstract Implicits
+
+An abstract implicit `val` or `def` in Scala 2 can be expressed in Dotty using a regular abstract definition and an alias default. E.g., Scala 2's
+```scala
+implicit def symDecorator: SymDecorator
+```
+can be expressed in Dotty as
+```scala
+def symDecorator: SymDecorator
+default for SymDecorator = symDecorator
+```
+
+## Implementation Status and Timeline
+
+The Dotty implementation implements both Scala 2's implicits and the new abstractions. In fact, support for Scala 2's implicits is an essential part of the common language subset between 2.13/2.14 and Dotty.
+Migration to the new abstractions will be supported by making automatic rewritings available.
+
+Depending on adoption patterns, old-style implicits might start to be deprecated in a version following Scala 3.0.
diff --git a/docs/docs/reference/contextual-defaults/typeclasses.md b/docs/docs/reference/contextual-defaults/typeclasses.md
new file mode 100644
index 000000000000..7f98c560940f
--- /dev/null
+++ b/docs/docs/reference/contextual-defaults/typeclasses.md
@@ -0,0 +1,66 @@
+---
+layout: doc-page
+title: "Implementing Typeclasses"
+---
+
+Defaults, extension methods and context bounds
+allow a concise and natural expression of _typeclasses_. Typeclasses are just traits
+with canonical implementations defined by defaults. 
Here are some examples of standard typeclasses: + +### Semigroups and monoids: + +```scala +trait SemiGroup[T] { + @infix def (x: T) combine (y: T): T +} + +trait Monoid[T] extends SemiGroup[T] { + def unit: T +} + +object Monoid { + def apply[T](given m: Monoid[T]) = m +} + +default for Monoid[String] { + def (x: String) combine (y: String): String = x.concat(y) + def unit: String = "" +} + +default for Monoid[Int] { + def (x: Int) combine (y: Int): Int = x + y + def unit: Int = 0 +} + +def sum[T: Monoid](xs: List[T]): T = + xs.foldLeft(Monoid[T].unit)(_ combine _) +``` + +### Functors and monads: + +```scala +trait Functor[F[_]] { + def [A, B](x: F[A]).map(f: A => B): F[B] +} + +trait Monad[F[_]] extends Functor[F] { + def [A, B](x: F[A]).flatMap(f: A => F[B]): F[B] + def [A, B](x: F[A]).map(f: A => B) = x.flatMap(f `andThen` pure) + + def pure[A](x: A): F[A] +} + +default listMonad for Monad[List] { + def [A, B](xs: List[A]).flatMap(f: A => List[B]): List[B] = + xs.flatMap(f) + def pure[A](x: A): List[A] = + List(x) +} + +default readerMonad[Ctx] for Monad[[X] =>> Ctx => X] { + def [A, B](r: Ctx => A).flatMap(f: A => Ctx => B): Ctx => B = + ctx => f(r(ctx))(ctx) + def pure[A](x: A): Ctx => A = + ctx => x +} +``` diff --git a/docs/docs/reference/contextual-witnesses/context-bounds.md b/docs/docs/reference/contextual-witnesses/context-bounds.md new file mode 100644 index 000000000000..b80285827a50 --- /dev/null +++ b/docs/docs/reference/contextual-witnesses/context-bounds.md @@ -0,0 +1,30 @@ +--- +layout: doc-page +title: "Context Bounds" +--- + +## Context Bounds + +A context bound is a shorthand for expressing the common pattern of an implicit parameter that depends on a type parameter. 
Using a context bound, the `maximum` function of the last section can be written like this: +```scala +def maximum[T: Ord](xs: List[T]): T = xs.reduceLeft(max) +``` +A bound like `: Ord` on a type parameter `T` of a method or class indicates an implicit parameter `(given Ord[T])`. The implicit parameter(s) generated from context bounds come last in the definition of the containing method or class. E.g., +```scala +def f[T: C1 : C2, U: C3](x: T)(given y: U, z: V): R +``` +would expand to +```scala +def f[T, U](x: T)(given y: U, z: V)(given C1[T], C2[T], C3[U]): R +``` +Context bounds can be combined with subtype bounds. If both are present, subtype bounds come first, e.g. +```scala +def g[T <: B : C](x: T): R = ... +``` + +## Syntax + +``` +TypeParamBounds ::= [SubtypeBounds] {ContextBound} +ContextBound ::= ‘:’ Type +``` diff --git a/docs/docs/reference/contextual-witnesses/conversions.md b/docs/docs/reference/contextual-witnesses/conversions.md new file mode 100644 index 000000000000..a3c4ffcfd35b --- /dev/null +++ b/docs/docs/reference/contextual-witnesses/conversions.md @@ -0,0 +1,75 @@ +--- +layout: doc-page +title: "Implicit Conversions" +--- + +Implicit conversions are defined by witnesses of the `scala.Conversion` class. +This class is defined in package `scala` as follows: +```scala +abstract class Conversion[-T, +U] extends (T => U) +``` +For example, here is an implicit conversion from `String` to `Token`: +```scala +witness of Conversion[String, Token] { + def apply(str: String): Token = new KeyWord(str) +} +``` +Using an alias this can be expressed more concisely as: +```scala +witness of Conversion[String, Token] = new KeyWord(_) +``` +An implicit conversion is applied automatically by the compiler in three situations: + +1. If an expression `e` has type `T`, and `T` does not conform to the expression's expected type `S`. +2. In a selection `e.m` with `e` of type `T`, but `T` defines no member `m`. +3. 
In an application `e.m(args)` with `e` of type `T`, if `T` does define + some member(s) named `m`, but none of these members can be applied to the arguments `args`. + +In the first case, the compiler looks for a `scala.Conversion` witness that maps +an argument of type `T` to type `S`. In the second and third +case, it looks for a `scala.Conversion` witness that maps an argument of type `T` +to a type that defines a member `m` which can be applied to `args` if present. +If such a witness `C` is found, the expression `e` is replaced by `C.apply(e)`. + +## Examples + +1. The `Predef` package contains "auto-boxing" conversions that map +primitive number types to subclasses of `java.lang.Number`. For instance, the +conversion from `Int` to `java.lang.Integer` can be defined as follows: +```scala +witness int2Integer of Conversion[Int, java.lang.Integer] = + java.lang.Integer.valueOf(_) +``` + +2. The "magnet" pattern is sometimes used to express many variants of a method. Instead of defining overloaded versions of the method, one can also let the method take one or more arguments of specially defined "magnet" types, into which various argument types can be converted. E.g. +```scala +object Completions { + + // The argument "magnet" type + enum CompletionArg { + case Error(s: String) + case Response(f: Future[HttpResponse]) + case Status(code: Future[StatusCode]) + } + object CompletionArg { + + // conversions defining the possible arguments to pass to `complete` + // these always come with CompletionArg + // They can be invoked explicitly, e.g. + // + // CompletionArg.fromStatusCode(statusCode) + + witness fromString of Conversion[String, CompletionArg] = Error(_) + witness fromFuture of Conversion[Future[HttpResponse], CompletionArg] = Response(_) + witness fromStatusCode of Conversion[Future[StatusCode], CompletionArg] = Status(_) + } + import CompletionArg._ + + def complete[T](arg: CompletionArg) = arg match { + case Error(s) => ... + case Response(f) => ... 
+      case Status(code) => ...
+  }
+}
+```
+This setup is more complicated than simple overloading of `complete`, but it can still be useful if normal overloading is not available (as in the case above, since we cannot have two overloaded methods that take `Future[...]` arguments), or if normal overloading would lead to a combinatorial explosion of variants.
diff --git a/docs/docs/reference/contextual-witnesses/derivation.md b/docs/docs/reference/contextual-witnesses/derivation.md
new file mode 100644
index 000000000000..7a70e0d4a6b8
--- /dev/null
+++ b/docs/docs/reference/contextual-witnesses/derivation.md
@@ -0,0 +1,399 @@
+---
+layout: doc-page
+title: Type Class Derivation
+---
+
+Type class derivation is a way to automatically generate witnesses for type classes which satisfy some simple
+conditions. A type class in this sense is any trait or class with a type parameter determining the type being operated
+on. Common examples are `Eq`, `Ordering`, or `Show`. For example, given the following `Tree` algebraic data type
+(ADT),
+
+```scala
+enum Tree[T] derives Eq, Ordering, Show {
+  case Branch[T](left: Tree[T], right: Tree[T])
+  case Leaf[T](elem: T)
+}
+```
+
+The `derives` clause generates the following witnesses for the `Eq`, `Ordering` and `Show` type classes in the
+companion object of `Tree`,
+
+```scala
+witness [T: Eq] of Eq[Tree[T]] = Eq.derived
+witness [T: Ordering] of Ordering[Tree[T]] = Ordering.derived
+witness [T: Show] of Show[Tree[T]] = Show.derived
+```
+
+We say that `Tree` is the _deriving type_ and that the `Eq`, `Ordering` and `Show` instances are _derived instances_.
+
+### Types supporting `derives` clauses
+
+All data types can have a `derives` clause. This document focuses primarily on data types which also have a witness
+of the `Mirror` type class available. 
Witnesses of the `Mirror` type class are generated automatically by the compiler +for, + ++ enums and enum cases ++ case classes and case objects ++ sealed classes or traits that have only case classes and case objects as children + +`Mirror` type class witnesses provide information at the type level about the components and labelling of the type. +They also provide minimal term level infrastructure to allow higher level libraries to provide comprehensive +derivation support. + +```scala +sealed trait Mirror { + + /** the type being mirrored */ + type MirroredType + + /** the type of the elements of the mirrored type */ + type MirroredElemTypes + + /** The mirrored *-type */ + type MirroredMonoType + + /** The name of the type */ + type MirroredLabel <: String + + /** The names of the elements of the type */ + type MirroredElemLabels <: Tuple +} + +object Mirror { + /** The Mirror for a product type */ + trait Product extends Mirror { + + /** Create a new instance of type `T` with elements taken from product `p`. */ + def fromProduct(p: scala.Product): MirroredMonoType + } + + trait Sum extends Mirror { self => + /** The ordinal number of the case class of `x`. For enums, `ordinal(x) == x.ordinal` */ + def ordinal(x: MirroredMonoType): Int + } +} +``` + +Product types (i.e. case classes and objects, and enum cases) have mirrors which are subtypes of `Mirror.Product`. Sum +types (i.e. sealed class or traits with product children, and enums) have mirrors which are subtypes of `Mirror.Sum`. 
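As an aside for readers experimenting today: this `Mirror` design corresponds closely to what shipped in Scala 3 as `scala.deriving.Mirror`. Assuming a current Scala 3 compiler, a product mirror can be summoned and exercised directly:

```scala
import scala.deriving.Mirror

case class Leaf[T](elem: T)

// The compiler synthesizes a Mirror.Product witness for the case class.
val m = summon[Mirror.ProductOf[Leaf[Int]]]

// fromProduct rebuilds a Leaf value from its element tuple.
val leaf = m.fromProduct(Tuple1(42))
```

Here `m.fromProduct` has the statically known result type `Leaf[Int]`, because `MirroredMonoType` is fixed by the summoned mirror.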
+
+For the `Tree` ADT from above the following `Mirror` instances will be automatically provided by the compiler,
+
+```scala
+// Mirror for Tree
+Mirror.Sum {
+  type MirroredType = Tree
+  type MirroredElemTypes[T] = (Branch[T], Leaf[T])
+  type MirroredMonoType = Tree[_]
+  type MirroredLabel = "Tree"
+  type MirroredElemLabels = ("Branch", "Leaf")
+
+  def ordinal(x: MirroredMonoType): Int = x match {
+    case _: Branch[_] => 0
+    case _: Leaf[_] => 1
+  }
+}
+
+// Mirror for Branch
+Mirror.Product {
+  type MirroredType = Branch
+  type MirroredElemTypes[T] = (Tree[T], Tree[T])
+  type MirroredMonoType = Branch[_]
+  type MirroredLabel = "Branch"
+  type MirroredElemLabels = ("left", "right")
+
+  def fromProduct(p: Product): MirroredMonoType =
+    new Branch(...)
+}
+
+// Mirror for Leaf
+Mirror.Product {
+  type MirroredType = Leaf
+  type MirroredElemTypes[T] = Tuple1[T]
+  type MirroredMonoType = Leaf[_]
+  type MirroredLabel = "Leaf"
+  type MirroredElemLabels = Tuple1["elem"]
+
+  def fromProduct(p: Product): MirroredMonoType =
+    new Leaf(...)
+}
+```
+
+Note the following properties of `Mirror` types,
+
++ Properties are encoded using types rather than terms. This means that they have no runtime footprint unless used and
+  also that they are a compile time feature for use with Dotty's metaprogramming facilities.
++ The kinds of `MirroredType` and `MirroredElemTypes` match the kind of the data type the mirror is an instance for.
+  This allows `Mirrors` to support ADTs of all kinds.
++ There is no distinct representation type for sums or products (i.e. there is no `HList` or `Coproduct` type as in
+  Scala 2 versions of shapeless). Instead the collection of child types of a data type is represented by an ordinary,
+  possibly parameterized, tuple type. Dotty's metaprogramming facilities can be used to work with these tuple types
+  as-is, and higher level libraries can be built on top of them. 
++ The methods `ordinal` and `fromProduct` are defined in terms of `MirroredMonoType`, which is the type of kind `*`
+  obtained from `MirroredType` by wildcarding its type parameters.
+
+### Type classes supporting automatic deriving
+
+A trait or class can appear in a `derives` clause if its companion object defines a method named `derived`. The
+signature and implementation of a `derived` method for a type class `TC[_]` are arbitrary, but it is typically of the
+following form,
+
+```scala
+def derived[T](given Mirror.Of[T]): TC[T] = ...
+```
+
+That is, the `derived` method takes an implicit parameter of (some subtype of) type `Mirror` which defines the shape of
+the deriving type `T`, and computes the type class implementation according to that shape. This is all that the
+provider of an ADT with a `derives` clause has to know about the derivation of a type class instance.
+
+Note that `derived` methods may have given `Mirror` arguments indirectly (e.g. by having a given argument which in turn
+has a given `Mirror`), or not at all (e.g. they might use some completely different user-provided mechanism, for
+instance using Dotty macros or runtime reflection). We expect that (direct or indirect) `Mirror`-based implementations
+will be the most common, and that is what this document emphasises.
+
+Type class authors will most likely use higher level derivation or generic programming libraries to implement
+`derived` methods. An example of how a `derived` method might be implemented using _only_ the low level facilities
+described above and Dotty's general metaprogramming features is provided below.
It is not anticipated that type class
+authors would normally implement a `derived` method in this way; however, this walkthrough can be taken as a guide for
+authors of the higher level derivation libraries that we expect typical type class authors will use (for a fully
+worked out example of such a library, see [shapeless 3](https://github.com/milessabin/shapeless/tree/shapeless-3)).
+
+#### How to write a type class `derived` method using low level mechanisms
+
+The low-level method we will use to implement a type class `derived` method in this example exploits three new
+type-level constructs in Dotty: inline methods, inline matches, and implicit searches via `summonFrom`. Given this definition of the
+`Eq` type class,
+
+```scala
+trait Eq[T] {
+  def eqv(x: T, y: T): Boolean
+}
+```
+
+we need to implement a method `Eq.derived` on the companion object of `Eq` that produces a witness for `Eq[T]` given
+a `Mirror[T]`. Here is a possible implementation,
+
+```scala
+inline witness derived[T](given m: Mirror.Of[T]) of Eq[T] = {
+  val elemInstances = summonAll[m.MirroredElemTypes]           // (1)
+  inline m match {                                             // (2)
+    case s: Mirror.SumOf[T]     => eqSum(s, elemInstances)
+    case p: Mirror.ProductOf[T] => eqProduct(p, elemInstances)
+  }
+}
+```
+
+Note that `derived` is defined as an `inline` witness. This means that the method will be expanded at
+call sites (for instance the compiler generated instance definitions in the companion objects of ADTs which have a
+`derives Eq` clause), and also that it can be used recursively if necessary, to compute instances for children.
+
+The body of this method (1) first materializes the `Eq` instances for all the child types of the type the instance is
+being derived for. This is either all the branches of a sum type or all the fields of a product type.
The implementation of `summonAll` is `inline` and uses Dotty's `summonFrom` construct (via the generic `summon` helper) to collect the instances as a
+`List`,
+
+```scala
+inline def summon[T]: T = summonFrom {
+  case t: T => t
+}
+
+inline def summonAll[T <: Tuple]: List[Eq[_]] = inline erasedValue[T] match {
+  case _: Unit => Nil
+  case _: (t *: ts) => summon[Eq[t]] :: summonAll[ts]
+}
+```
+
+With the instances for children in hand, the `derived` method uses an `inline match` to dispatch to methods which can
+construct instances for either sums or products (2). Note that because `derived` is `inline` the match will be
+resolved at compile-time and only the right-hand side of the matching case will be inlined into the generated code with
+types refined as revealed by the match.
+
+In the sum case, `eqSum`, we use the runtime `ordinal` values of the arguments to `eqv` to first check whether the two
+values are of the same subtype of the ADT (3) and then, if they are, to further test for equality based on the `Eq`
+instance for the appropriate ADT subtype, using the auxiliary method `check` (4).
+
+```scala
+def eqSum[T](s: Mirror.SumOf[T], elems: List[Eq[_]]): Eq[T] =
+  new Eq[T] {
+    def eqv(x: T, y: T): Boolean = {
+      val ordx = s.ordinal(x)                            // (3)
+      (s.ordinal(y) == ordx) && check(elems(ordx))(x, y) // (4)
+    }
+  }
+```
+
+In the product case, `eqProduct`, we test the runtime values of the arguments to `eqv` for equality as products based
+on the `Eq` instances for the fields of the data type (5),
+
+```scala
+def eqProduct[T](p: Mirror.ProductOf[T], elems: List[Eq[_]]): Eq[T] =
+  new Eq[T] {
+    def eqv(x: T, y: T): Boolean =
+      iterator(x).zip(iterator(y)).zip(elems.iterator).forall { // (5)
+        case ((x, y), elem) => check(elem)(x, y)
+      }
+  }
+```
+
+Pulling this all together, we have the following complete implementation,
+
+```scala
+import scala.deriving._
+import scala.compiletime.{erasedValue, summonFrom}
+
+inline def summon[T]: T = summonFrom {
+  case t: T => t
+}
+
+inline def summonAll[T <: Tuple]: List[Eq[_]] = inline erasedValue[T] match {
+  case _: Unit => Nil
+  case _: (t *: ts) => summon[Eq[t]] :: summonAll[ts]
+}
+
+trait Eq[T] {
+  def eqv(x: T, y: T): Boolean
+}
+
+object Eq {
+  witness eqInt of Eq[Int] {
+    def eqv(x: Int, y: Int) = x == y
+  }
+
+  def check(elem: Eq[_])(x: Any, y: Any): Boolean =
+    elem.asInstanceOf[Eq[Any]].eqv(x, y)
+
+  def iterator[T](p: T) = p.asInstanceOf[Product].productIterator
+
+  def eqSum[T](s: Mirror.SumOf[T], elems: List[Eq[_]]): Eq[T] =
+    new Eq[T] {
+      def eqv(x: T, y: T): Boolean = {
+        val ordx = s.ordinal(x)
+        (s.ordinal(y) == ordx) && check(elems(ordx))(x, y)
+      }
+    }
+
+  def eqProduct[T](p: Mirror.ProductOf[T], elems: List[Eq[_]]): Eq[T] =
+    new Eq[T] {
+      def eqv(x: T, y: T): Boolean =
+        iterator(x).zip(iterator(y)).zip(elems.iterator).forall {
+          case ((x, y), elem) => check(elem)(x, y)
+        }
+    }
+
+  inline witness derived[T](given m: Mirror.Of[T]) of Eq[T] = {
+    val elemInstances = summonAll[m.MirroredElemTypes]
+    inline m match {
+      case s: Mirror.SumOf[T]     => eqSum(s, elemInstances)
+      case p: Mirror.ProductOf[T] => eqProduct(p,
elemInstances)
+    }
+  }
+}
+```
+
+We can test this relative to a simple ADT like so,
+
+```scala
+enum Opt[+T] derives Eq {
+  case Sm(t: T)
+  case Nn
+}
+
+object Test extends App {
+  import Opt._
+  val eqoi = summon[Eq[Opt[Int]]]
+  assert(eqoi.eqv(Sm(23), Sm(23)))
+  assert(!eqoi.eqv(Sm(23), Sm(13)))
+  assert(!eqoi.eqv(Sm(23), Nn))
+}
+```
+
+In this case the code that is generated by the inline expansion for the derived `Eq` instance for `Opt` looks like the
+following, after a little polishing,
+
+```scala
+witness derived$Eq[T](given eqT: Eq[T]) of Eq[Opt[T]] =
+  eqSum(summon[Mirror[Opt[T]]],
+    List(
+      eqProduct(summon[Mirror[Sm[T]]], List(summon[Eq[T]])),
+      eqProduct(summon[Mirror[Nn.type]], Nil)
+    )
+  )
+```
+
+Alternative approaches can be taken to the way that `derived` methods can be defined. For example, more aggressively
+inlined variants using Dotty macros, whilst being more involved for type class authors to write than the example
+above, can produce code for type classes like `Eq` which eliminates all the abstraction artefacts (e.g. the `List`s of
+child instances in the above) and generates code which is indistinguishable from what a programmer might write by hand.
+As a third example, using a higher level library such as shapeless, the type class author could define an equivalent
+`derived` method as,
+
+```scala
+witness eqSum[A](given inst: => K0.CoproductInstances[Eq, A]) of Eq[A] {
+  def eqv(x: A, y: A): Boolean = inst.fold2(x, y)(false)(
+    [t] => (eqt: Eq[t], t0: t, t1: t) => eqt.eqv(t0, t1)
+  )
+}
+
+witness eqProduct[A](given inst: K0.ProductInstances[Eq, A]) of Eq[A] {
+  def eqv(x: A, y: A): Boolean = inst.foldLeft2(x, y)(true: Boolean)(
+    [t] => (acc: Boolean, eqt: Eq[t], t0: t, t1: t) => Complete(!eqt.eqv(t0, t1))(false)(true)
+  )
+}
+
+inline def derived[A](given gen: K0.Generic[A]): Eq[A] = gen.derive(eqSum, eqProduct)
+```
+
+The framework described here enables all three of these approaches without mandating any of them.
+
+### Deriving instances elsewhere
+
+Sometimes one would like to derive a type class instance for an ADT after the ADT is defined, without being able to
+change the code of the ADT itself. To do this, simply define an instance using the `derived` method of the type class
+as its right-hand side. E.g., to implement `Ordering` for `Option`, define,
+
+```scala
+witness optionOrd[T: Ordering] of Ordering[Option[T]] = Ordering.derived
+```
+
+Assuming the `Ordering.derived` method has a given parameter of type `Mirror[T]`, it will be satisfied by the
+compiler-generated `Mirror` instance for `Option` and the derivation of the instance will be expanded on the right
+hand side of this definition in the same way as an instance defined in an ADT companion object.
+
+### Syntax
+
+```
+Template ::= InheritClauses [TemplateBody]
+EnumDef ::= id ClassConstr InheritClauses EnumBody
+InheritClauses ::= [‘extends’ ConstrApps] [‘derives’ QualId {‘,’ QualId}]
+ConstrApps ::= ConstrApp {‘with’ ConstrApp}
+            | ConstrApp {‘,’ ConstrApp}
+```
+
+### Discussion
+
+This type class derivation framework is intentionally very small and low-level. There are essentially two pieces of
+infrastructure in compiler-generated `Mirror` instances,
+
++ type members encoding properties of the mirrored types.
++ a minimal value level mechanism for working generically with terms of the mirrored types.
+
+The `Mirror` infrastructure can be seen as an extension of the existing `Product` infrastructure for case classes:
+typically `Mirror` types will be implemented by the ADT's companion object, hence the type members and the `ordinal` or
+`fromProduct` methods will be members of that object. The primary motivation for this design decision, and the
+decision to encode properties via types rather than terms, was to keep the bytecode and runtime footprint of the
+feature small enough to make it possible to provide `Mirror` instances _unconditionally_.
+
+Whilst `Mirrors` encode properties precisely via type members, the value level `ordinal` and `fromProduct` are
+somewhat weakly typed (because they are defined in terms of `MirroredMonoType`), just like the members of `Product`.
+This means that code for generic type classes has to ensure that type exploration and value selection proceed in
+lockstep, and it has to assert this conformance in some places using casts. If generic type classes are correctly
+written these casts will never fail.
+
+As mentioned, however, the compiler-provided mechanism is intentionally very low level and it is anticipated that
+higher level type class derivation and generic programming libraries will build on this and Dotty's other
+metaprogramming facilities to hide these low-level details from type class authors and general users. Type class
+derivation in the style of both shapeless and Magnolia is possible (a prototype of shapeless 3, which combines
+aspects of both shapeless 2 and Magnolia, has been developed alongside this language feature), as is a more aggressively
+inlined style, supported by Dotty's new quote/splice macro and inlining facilities.
diff --git a/docs/docs/reference/contextual-witnesses/extension-methods.md b/docs/docs/reference/contextual-witnesses/extension-methods.md
new file mode 100644
index 000000000000..02073766beee
--- /dev/null
+++ b/docs/docs/reference/contextual-witnesses/extension-methods.md
@@ -0,0 +1,183 @@
+---
+layout: doc-page
+title: "Extension Methods"
+---
+
+Extension methods allow one to add methods to a type after the type is defined.
Example:
+
+```scala
+case class Circle(x: Double, y: Double, radius: Double)
+
+def (c: Circle).circumference: Double = c.radius * math.Pi * 2
+```
+
+Like regular methods, extension methods can be invoked with the usual selection syntax:
+
+```scala
+val circle = Circle(0, 0, 1)
+circle.circumference
+```
+
+### Translation of Extension Methods
+
+Extension methods are methods that have a parameter clause in front of the defined
+identifier. They translate to methods where the leading parameter section is moved
+to after the defined identifier. So, the definition of `circumference` above translates
+to the following plain method, and can also be invoked as such:
+```scala
+def circumference(c: Circle): Double = c.radius * math.Pi * 2
+
+assert(circle.circumference == circumference(circle))
+```
+
+### Translation of Calls to Extension Methods
+
+When is an extension method applicable? There are two possibilities:
+
+ - An extension method is applicable if it is visible under a simple name, by being defined
+   or inherited or imported in a scope enclosing the application.
+ - An extension method is applicable if it is a member of some witness that is eligible at the point of the application.
+
+As an example, consider an extension method `longestStrings` on `Seq[String]`, defined in a trait `StringSeqOps`.
+
+```scala
+trait StringSeqOps {
+  def (xs: Seq[String]).longestStrings = {
+    val maxLength = xs.map(_.length).max
+    xs.filter(_.length == maxLength)
+  }
+}
+```
+We can make the extension method available by defining a `StringSeqOps` witness, like this:
+```scala
+witness ops1 of StringSeqOps
+```
+Then
+```scala
+List("here", "is", "a", "list").longestStrings
+```
+is legal everywhere `ops1` is available. Alternatively, we can define `longestStrings` as a member of a normal object. But then the method has to be brought into scope to be usable as an extension method.
+
+```scala
+object ops2 extends StringSeqOps
+import ops2.longestStrings
+List("here", "is", "a", "list").longestStrings
+```
+The precise rules for resolving a selection to an extension method are as follows.
+
+Assume a selection `e.m[Ts]` where `m` is not a member of `e`, where the type arguments `[Ts]` are optional,
+and where `T` is the expected type. The following two rewritings are tried in order:
+
+ 1. The selection is rewritten to `m[Ts](e)`.
+ 2. If the first rewriting does not typecheck with expected type `T`, and there is a witness `w`
+    in either the current scope or in the implicit scope of `T` such that `w` defines an extension
+    method named `m`, then the selection is expanded to `w.m[Ts](e)`.
+    This second rewriting is attempted at the time when the compiler also tries an implicit conversion
+    from `T` to a type containing `m`. If there is more than one way of rewriting, an ambiguity error results.
+
+So `circle.circumference` translates to `CircleOps.circumference(circle)`, provided
+`circle` has type `Circle` and `CircleOps` is eligible (i.e. it is visible at the point of call or it is defined in the companion object of `Circle`).
+
+### Operators
+
+The extension method syntax also applies to the definition of operators.
+In this case it is allowed and preferable to omit the period between the leading parameter list
+and the operator. In each case the definition syntax mirrors the way the operator is applied.
+Examples:
+```scala
+def (x: String) < (y: String) = ...
+def (x: Elem) +: (xs: Seq[Elem]) = ...
+def (x: Number) min (y: Number) = ...
+
+"ab" < "c"
+1 +: List(2, 3)
+x min 3
+```
+The three definitions above translate to
+```scala
+def < (x: String)(y: String) = ...
+def +: (xs: Seq[Elem])(x: Elem) = ...
+def min(x: Number)(y: Number) = ...
+```
+Note the swap of the two parameters `x` and `xs` when translating
+the right-binding operator `+:` to an extension method.
This is analogous +to the implementation of right binding operators as normal methods. + +### Generic Extensions + +The `StringSeqOps` examples extended a specific instance of a generic type. It is also possible to extend a generic type by adding type parameters to an extension method. Examples: + +```scala +def [T](xs: List[T]) second = + xs.tail.head + +def [T](xs: List[List[T]]) flattened = + xs.foldLeft[List[T]](Nil)(_ ++ _) + +def [T: Numeric](x: T) + (y: T): T = + summon[Numeric[T]].plus(x, y) +``` + +If an extension method has type parameters, they come immediately after the `def` and are followed by the extended parameter. When calling a generic extension method, any explicitly given type arguments follow the method name. So the `second` method can be instantiated as follows: +```scala +List(1, 2, 3).second[Int] +``` +### Collective Extensions + +A collective extension defines one or more concrete methods that have the same type parameters +and prefix parameter. Examples: + +```scala +extension stringOps of (xs: Seq[String]) with { + def longestStrings: Seq[String] = { + val maxLength = xs.map(_.length).max + xs.filter(_.length == maxLength) + } +} + +extension listOps of [T](xs: List[T]) with { + def second = xs.tail.head + def third: T = xs.tail.tail.head +} + +extension of [T](xs: List[T])(given Ordering[T]) with { + def largest(n: Int) = xs.sorted.takeRight(n) +} +``` +If an extension is anonymous (as in the last clause), its name is synthesized from the name of the first defined extension method. 
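+
+Assuming the collective extensions above are in scope (and a standard `Ordering[Int]` witness is available for `largest`), their methods are invoked like any other extension methods:
+
+```scala
+Seq("here", "is", "a", "list").longestStrings  // via stringOps
+List(1, 2, 3).second                           // via listOps
+List(3, 1, 2).largest(2)                       // via the anonymous extension
+```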
+
+The extensions above are equivalent to the following witnesses where the implemented parent is `AnyRef` and the leading parameters are repeated in each extension method definition:
+```scala
+witness stringOps of AnyRef {
+  def (xs: Seq[String]).longestStrings: Seq[String] = {
+    val maxLength = xs.map(_.length).max
+    xs.filter(_.length == maxLength)
+  }
+}
+witness listOps of AnyRef {
+  def [T](xs: List[T]) second = xs.tail.head
+  def [T](xs: List[T]) third: T = xs.tail.tail.head
+}
+witness extension_largest_List_T of AnyRef {
+  def [T](xs: List[T]) largest (given Ordering[T])(n: Int) =
+    xs.sorted.takeRight(n)
+}
+```
+
+`extension` and `of` are soft keywords. They can also be used as regular identifiers.
+
+### Syntax
+
+Here are the syntax changes for extension methods and collective extensions relative
+to the [current syntax](../../internals/syntax.md). `extension` is a soft keyword, recognized only
+in tandem with `of`. It can be used as an identifier everywhere else.
+```
+DefSig ::= ...
+  | ExtParamClause [nl] [‘.’] id DefParamClauses
+ExtParamClause ::= [DefTypeParamClause] ‘(’ DefParam ‘)’
+TmplDef ::= ...
+  | ‘extension’ ExtensionDef
+ExtensionDef ::= [id] ‘of’ ExtParamClause {GivenParamClause} ‘with’ ExtMethods
+ExtMethods ::= ‘{’ ‘def’ DefDef {semi ‘def’ DefDef} ‘}’
+```
+
diff --git a/docs/docs/reference/contextual-witnesses/given-clauses.md b/docs/docs/reference/contextual-witnesses/given-clauses.md
new file mode 100644
index 000000000000..79f877ea31b6
--- /dev/null
+++ b/docs/docs/reference/contextual-witnesses/given-clauses.md
@@ -0,0 +1,115 @@
+---
+layout: doc-page
+title: "Implicit Parameters"
+---
+
+Functional programming tends to express most dependencies as simple function parameterization.
+This is clean and powerful, but it sometimes leads to functions that take many parameters and
+call trees where the same value is passed over and over again in long call chains to many
+functions.
Implicit parameters can help here since they enable the compiler to synthesize +repetitive arguments instead of the programmer having to write them explicitly. + +For example, with the [witnesses](./witnesses.md) defined previously, +a maximum function that works for any arguments for which an ordering exists can be defined as follows: +```scala +def max[T](x: T, y: T)(given ord: Ord[T]): T = + if (ord.compare(x, y) < 0) y else x +``` +Here, `ord` is an _implicit parameter_ introduced with a `given` clause. +The `max` method can be applied as follows: +```scala +max(2, 3)(given intOrd) +``` +The `(given intOrd)` part passes `intOrd` as an argument for the `ord` parameter. But the point of +implicit parameters is that this argument can also be left out (and it usually is). So the following +applications are equally valid: +```scala +max(2, 3) +max(List(1, 2, 3), Nil) +``` + +## Anonymous Given Clauses + +In many situations, the name of an implicit parameter need not be +mentioned explicitly at all, since it is used only in synthesized arguments for +other implicit parameters. In that case one can avoid defining a parameter name +and just provide its type. Example: +```scala +def maximum[T](xs: List[T])(given Ord[T]): T = + xs.reduceLeft(max) +``` +`maximum` takes an implicit parameter of type `Ord` only to pass it on as an +inferred argument to `max`. The name of the parameter is left out. + +Generally, implicit parameters may be defined either as a full parameter list `(given p_1: T_1, ..., p_n: T_n)` or just as a sequence of types `(given T_1, ..., T_n)`. Vararg implicit parameters are not supported. 
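+
+A given clause either names all of its parameters (when the body refers to them) or lists only their types (when they are just passed on implicitly). A side-by-side sketch, reusing `Ord` and `max` from above with hypothetical names `max2`/`max3`:
+
+```scala
+// named form: the parameter is referred to in the body
+def max2[T](x: T, y: T)(given ord: Ord[T]): T =
+  if (ord.compare(x, y) < 0) y else x
+
+// types-only form: the anonymous parameter is only passed on to max2
+def max3[T](x: T, y: T, z: T)(given Ord[T]): T = max2(max2(x, y), z)
+```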
+ +## Inferring Complex Arguments + +Here are two other methods that have an implicit parameter of type `Ord[T]`: +```scala +def descending[T](given asc: Ord[T]): Ord[T] = new Ord[T] { + def compare(x: T, y: T) = asc.compare(y, x) +} + +def minimum[T](xs: List[T])(given Ord[T]) = + maximum(xs)(given descending) +``` +The `minimum` method's right hand side passes `descending` as an explicit argument to `maximum(xs)`. +With this setup, the following calls are all well-formed, and they all normalize to the last one: +```scala +minimum(xs) +maximum(xs)(given descending) +maximum(xs)(given descending(given listOrd)) +maximum(xs)(given descending(given listOrd(given intOrd))) +``` + +## Multiple Given Clauses + +There can be several implicit parameter clauses in a definition and implicit parameter clauses can be freely +mixed with normal ones. Example: +```scala +def f(u: Universe)(given ctx: u.Context)(given s: ctx.Symbol, k: ctx.Kind) = ... +``` +Multiple given clauses are matched left-to-right in applications. Example: +```scala +object global extends Universe { type Context = ... } +witness ctx of global.Context { type Symbol = ...; type Kind = ... } +witness sym of ctx.Symbol +witness kind of ctx.Kind +``` +Then the following calls are all valid (and normalize to the last one) +```scala +f +f(global) +f(global)(given ctx) +f(global)(given ctx)(given sym, kind) +``` +But `f(global)(given sym, kind)` would give a type error. + +## Summoning Instances + +The method `summon` in `Predef` returns the witness of a specific type. For example, +the witness of `Ord[List[Int]]` is produced by +```scala +summon[Ord[List[Int]]] // reduces to listOrd(given intOrd) +``` +The `summon` method is simply defined as the (non-widening) identity function over an implicit parameter. 
+```scala +def summon[T](given x: T): x.type = x +``` + +## Syntax + +Here is the new syntax of parameters and arguments seen as a delta from the [standard context free syntax of Scala 3](../../internals/syntax.md). +``` +ClsParamClauses ::= ... + | {ClsParamClause} {GivenClsParamClause} +GivenClsParamClause ::= ‘(’ ‘given’ (ClsParams | GivenTypes) ‘)’ +DefParamClauses ::= ... + | {DefParamClause} {GivenParamClause} +GivenParamClause ::= ‘(’ ‘given’ (DefParams | GivenTypes) ‘)’ +GivenTypes ::= AnnotType {‘,’ AnnotType} + +ParArgumentExprs ::= ... + | ‘(’ ‘given’ ExprsInParens ‘)’ +``` diff --git a/docs/docs/reference/contextual-witnesses/implicit-by-name-parameters.md b/docs/docs/reference/contextual-witnesses/implicit-by-name-parameters.md new file mode 100644 index 000000000000..e330ecc0456a --- /dev/null +++ b/docs/docs/reference/contextual-witnesses/implicit-by-name-parameters.md @@ -0,0 +1,67 @@ +--- +layout: doc-page +title: "Implicit By-Name Parameters" +--- + +Implicit parameters can be declared by-name to avoid a divergent inferred expansion. Example: + +```scala +trait Codec[T] { + def write(x: T): Unit +} + +witness intCodec of Codec[Int] = ??? + +witness optionCodec[T](given ev: => Codec[T]) of Codec[Option[T]] { + def write(xo: Option[T]) = xo match { + case Some(x) => ev.write(x) + case None => + } +} + +val s = summon[Codec[Option[Int]]] + +s.write(Some(33)) +s.write(None) +``` +As is the case for a normal by-name parameter, the argument for the implicit parameter `ev` +is evaluated on demand. In the example above, if the option value `x` is `None`, it is +not evaluated at all. + +The synthesized argument for an implicit parameter is backed by a local val +if this is necessary to prevent an otherwise diverging expansion. + +The precise steps for synthesizing an argument for an implicit by-name parameter of type `=> T` are as follows. + + 1. Create a new witness of type `T`: + + ```scala + witness lv of T = ??? 
+    ```
+    where `lv` is an arbitrary fresh name.
+
+ 1. This witness is not immediately available as candidate for argument inference (making it immediately available could result in a loop in the synthesized computation). But it becomes available in all nested contexts that look again for an argument to an implicit by-name parameter.
+
+ 1. If this search succeeds with expression `E`, and `E` contains references to `lv`, replace `E` by
+
+    ```scala
+    { witness lv of T = E; lv }
+    ```
+
+    Otherwise, return `E` unchanged.
+
+In the example above, the definition of `s` would be expanded as follows.
+
+```scala
+val s = summon[Codec[Option[Int]]](
+  optionCodec[Int](intCodec)
+)
+```
+
+No local witness was generated because the synthesized argument is not recursive.
+
+### Reference
+
+For more info, see [Issue #1998](https://github.com/lampepfl/dotty/issues/1998)
+and the associated [Scala SIP](https://docs.scala-lang.org/sips/byname-implicits.html).
diff --git a/docs/docs/reference/contextual-witnesses/implicit-function-types-spec.md b/docs/docs/reference/contextual-witnesses/implicit-function-types-spec.md
new file mode 100644
index 000000000000..cda87bd33e54
--- /dev/null
+++ b/docs/docs/reference/contextual-witnesses/implicit-function-types-spec.md
@@ -0,0 +1,77 @@
+---
+layout: doc-page
+title: "Implicit Function Types - More Details"
+---
+
+## Syntax
+
+    Type ::= ...
+           | FunArgTypes ‘=>’ Type
+    FunArgTypes ::= InfixType
+           | ‘(’ [ ‘[given]’ FunArgType {‘,’ FunArgType } ] ‘)’
+           | ‘(’ ‘[given]’ TypedFunParam {‘,’ TypedFunParam } ‘)’
+    Bindings ::= ‘(’ [[‘given’] Binding {‘,’ Binding}] ‘)’
+
+Implicit function types associate to the right, e.g.
+`(given S) => (given T) => U` is the same as `(given S) => ((given T) => U)`.
+
+## Implementation
+
+Implicit function types are shorthands for class types that define `apply`
+methods with implicit parameters.
Specifically, the `N`-ary implicit function type
+`(given T1, ..., TN) => R` is a shorthand for the class type
+`ImplicitFunctionN[T1, ..., TN, R]`. Such class types are assumed to have the following definitions, for any value of `N >= 1`:
+```scala
+package scala
+trait ImplicitFunctionN[-T1, ..., -TN, +R] {
+  def apply(given x1: T1, ..., xN: TN): R
+}
+```
+Implicit function types erase to normal function types, so these classes are
+generated on the fly for typechecking, but not realized in actual code.
+
+Implicit function literals `(given x1: T1, ..., xn: Tn) => e` map
+implicit parameters `xi` of types `Ti` to the result of evaluating the expression `e`.
+The scope of each implicit parameter `xi` is `e`. The parameters must have pairwise distinct names.
+
+If the expected type of the implicit function literal is of the form
+`scala.ImplicitFunctionN[S1, ..., Sn, R]`, the expected type of `e` is `R` and
+the type `Ti` of any of the parameters `xi` can be omitted, in which case `Ti = Si` is assumed. If the expected type of the implicit function literal is
+some other type, all implicit parameter types must be explicitly given, and the expected type of `e` is undefined.
+The type of the implicit function literal is `scala.ImplicitFunctionN[S1, ..., Sn, T]`, where `T` is the widened
+type of `e`. `T` must be equivalent to a type which does not refer to any of
+the implicit parameters `xi`.
+
+The implicit function literal is evaluated as the instance creation
+expression
+```scala
+new scala.ImplicitFunctionN[T1, ..., Tn, T] {
+  def apply(given x1: T1, ..., xn: Tn): T = e
+}
+```
+An implicit parameter may also be a wildcard represented by an underscore `_`. In
+that case, a fresh name for the parameter is chosen arbitrarily.
+
+Note: The closing paragraph of the
+[Anonymous Functions section](https://www.scala-lang.org/files/archive/spec/2.12/06-expressions.html#anonymous-functions)
+of Scala 2.12 is subsumed by implicit function types and should be removed.
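+
+As a concrete unary instance of the scheme above (with a hypothetical `Ctx` trait), here is a literal together with the instance creation expression it is evaluated as:
+
+```scala
+trait Ctx
+
+val f: (given Ctx) => Int = (given ctx: Ctx) => 42
+
+// ...is evaluated as:
+val g: (given Ctx) => Int = new scala.ImplicitFunction1[Ctx, Int] {
+  def apply(given ctx: Ctx): Int = 42
+}
+```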
+
+Implicit function literals `(given x1: T1, ..., xn: Tn) => e` are
+automatically created for any expression `e` whose expected type is
+`scala.ImplicitFunctionN[T1, ..., Tn, R]`, unless `e` is
+itself an implicit function literal. This is analogous to the automatic
+insertion of `scala.Function0` around expressions in by-name argument position.
+
+Implicit function types generalize to `N > 22` in the same way that function types do, see [the corresponding
+documentation](../dropped-features/limit22.md).
+
+## Examples
+
+See the section on Expressiveness from [Simplicitly: foundations and
+applications of implicit function
+types](https://dl.acm.org/citation.cfm?id=3158130).
+
+### Type Checking
+
+After desugaring, no additional typing rules are required for implicit function types.
diff --git a/docs/docs/reference/contextual-witnesses/implicit-function-types.md b/docs/docs/reference/contextual-witnesses/implicit-function-types.md
new file mode 100644
index 000000000000..474e67da64c2
--- /dev/null
+++ b/docs/docs/reference/contextual-witnesses/implicit-function-types.md
@@ -0,0 +1,151 @@
+---
+layout: doc-page
+title: "Implicit Function Types"
+---
+
+_Implicit functions_ are functions with (only) implicit parameters.
+Their types are _implicit function types_. Here is an example of an implicit function type:
+
+```scala
+type Executable[T] = (given ExecutionContext) => T
+```
+An implicit function is applied to synthesized arguments, in
+the same way a method with a given clause is applied. For instance:
+```scala
+  witness ec of ExecutionContext = ...
+
+  def f(x: Int): Executable[Int] = ...
+ + f(2)(given ec) // explicit argument + f(2) // argument is inferred +``` +Conversely, if the expected type of an expression `E` is an implicit function type +`(given T_1, ..., T_n) => U` and `E` is not already an +implicit function literal, `E` is converted to an implicit function literal by rewriting to +```scala + (given x_1: T1, ..., x_n: Tn) => E +``` +where the names `x_1`, ..., `x_n` are arbitrary. This expansion is performed +before the expression `E` is typechecked, which means that `x_1`, ..., `x_n` +are available as witnesses in `E`. + +Like their types, implicit function literals are written with a `given` parameter clause. +They differ from normal function literals in that their types are implicit function types. + +For example, continuing with the previous definitions, +```scala + def g(arg: Executable[Int]) = ... + + g(22) // is expanded to g((given ev) => 22) + + g(f(2)) // is expanded to g((given ev) => f(2)(given ev)) + + g((given ctx) => f(22)(given ctx)) // is left as it is +``` +### Example: Builder Pattern + +Implicit function types have considerable expressive power. For +instance, here is how they can support the "builder pattern", where +the aim is to construct tables like this: +```scala + table { + row { + cell("top left") + cell("top right") + } + row { + cell("bottom left") + cell("bottom right") + } + } +``` +The idea is to define classes for `Table` and `Row` that allow +addition of elements via `add`: +```scala + class Table { + val rows = new ArrayBuffer[Row] + def add(r: Row): Unit = rows += r + override def toString = rows.mkString("Table(", ", ", ")") + } + + class Row { + val cells = new ArrayBuffer[Cell] + def add(c: Cell): Unit = cells += c + override def toString = cells.mkString("Row(", ", ", ")") + } + + case class Cell(elem: String) +``` +Then, the `table`, `row` and `cell` constructor methods can be defined +with implicit function types as parameters to avoid the plumbing boilerplate +that would otherwise be necessary. 
+```scala + def table(init: (given Table) => Unit) = { + witness t of Table + init + t + } + + def row(init: (given Row) => Unit)(given t: Table) = { + witness r of Row + init + t.add(r) + } + + def cell(str: String)(given r: Row) = + r.add(new Cell(str)) +``` +With that setup, the table construction code above compiles and expands to: +```scala + table { (given $t: Table) => + row { (given $r: Row) => + cell("top left")(given $r) + cell("top right")(given $r) + } (given $t) + row { (given $r: Row) => + cell("bottom left")(given $r) + cell("bottom right")(given $r) + } (given $t) + } +``` +### Example: Postconditions + +As a larger example, here is a way to define constructs for checking arbitrary postconditions using an extension method `ensuring` so that the checked result can be referred to simply by `result`. The example combines opaque aliases, implicit function types, and extension methods to provide a zero-overhead abstraction. + +```scala +object PostConditions { + opaque type WrappedResult[T] = T + + def result[T](given r: WrappedResult[T]): T = r + + def [T](x: T).ensuring(condition: (given WrappedResult[T]) => Boolean): T = { + assert(condition(given x)) + x + } +} +import PostConditions.{ensuring, result} + +val s = List(1, 2, 3).sum.ensuring(result == 6) +``` +**Explanations**: We use an implicit function type `(given WrappedResult[T]) => Boolean` +as the type of the condition of `ensuring`. An argument to `ensuring` such as +`(result == 6)` will therefore have a witness of type `WrappedResult[T]` in +scope to pass along to the `result` method. `WrappedResult` is a fresh type, to make sure +that we do not get unwanted witnesses in scope (this is good practice in all cases +where implicit parameters are involved). Since `WrappedResult` is an opaque type alias, its +values need not be boxed, and since `ensuring` is added as an extension method, its argument +does not need boxing either. 
Hence, the implementation of `ensuring` is about as efficient +as the best possible code one could write by hand: + +```scala +{ val result = List(1, 2, 3).sum + assert(result == 6) + result +} +``` +### Reference + +For more info, see the [blog article](https://www.scala-lang.org/blog/2016/12/07/implicit-function-types.html) +(which uses a different syntax that has been superseded). + +[More details](./implicit-function-types-spec.md) diff --git a/docs/docs/reference/contextual-witnesses/motivation.md b/docs/docs/reference/contextual-witnesses/motivation.md new file mode 100644 index 000000000000..e61db3c53a44 --- /dev/null +++ b/docs/docs/reference/contextual-witnesses/motivation.md @@ -0,0 +1,80 @@ +--- +layout: doc-page +title: "Overview" +--- + +### Critique of the Status Quo + +Scala's implicits are its most distinguished feature. They are _the_ fundamental way to abstract over context. They represent a unified paradigm with a great variety of use cases, among them: implementing type classes, establishing context, dependency injection, expressing capabilities, computing new types and proving relationships between them. + +Following Haskell, Scala was the second popular language to have some form of implicits. Other languages have followed suit, e.g. Rust's traits or Swift's protocol extensions. Design proposals are also on the table for Kotlin as [compile time dependency resolution](https://github.com/Kotlin/KEEP/blob/e863b25f8b3f2e9b9aaac361c6ee52be31453ee0/proposals/compile-time-dependency-resolution.md), for C# as [Shapes and Extensions](https://github.com/dotnet/csharplang/issues/164) +or for F# as [Traits](https://github.com/MattWindsor91/visualfsharp/blob/hackathon-vs/examples/fsconcepts.md). Implicits are also a common feature of theorem provers such as Coq or Agda. + +Even though these designs use widely different terminology, they are all variants of the core idea of _term inference_. 
Given a type, the compiler synthesizes a "canonical" term that has that type. Scala embodies the idea in a purer form than most other languages: An implicit parameter directly leads to an inferred argument term that could also be written down explicitly. By contrast, typeclass based designs are less direct since they hide term inference behind some form of type classification and do not offer the option of writing the inferred quantities (typically, dictionaries) explicitly. + +Given that term inference is where the industry is heading, and given that Scala has it in a very pure form, how come implicits are not more popular? In fact, it's fair to say that implicits are at the same time Scala's most distinguished and most controversial feature. I believe this is due to a number of aspects that together make implicits harder to learn than necessary and also make it harder to prevent abuses. + +Particular criticisms are: + +1. Being very powerful, implicits are easily over-used and mis-used. This observation holds in almost all cases when we talk about _implicit conversions_, which, even though conceptually different, share the same syntax with other implicit definitions. For instance, regarding the two definitions + + ```scala + implicit def i1(implicit x: T): C[T] = ... + implicit def i2(x: T): C[T] = ... + ``` + + the first of these is a conditional implicit _value_, the second an implicit _conversion_. Conditional implicit values are a cornerstone for expressing type classes, whereas most applications of implicit conversions have turned out to be of dubious value. The problem is that many newcomers to the language start with defining implicit conversions since they are easy to understand and seem powerful and convenient. Scala 3 will put under a language flag both definitions and applications of "undisciplined" implicit conversions between types defined elsewhere. This is a useful step to push back against overuse of implicit conversions. 
But the problem remains that syntactically, conversions and values just look too similar for comfort. + + 2. Another widespread abuse is over-reliance on implicit imports. This often leads to inscrutable type errors that go away with the right import incantation, leaving a feeling of frustration. Conversely, it is hard to see what implicits a program uses since implicits can hide anywhere in a long list of imports. + + 3. The syntax of implicit definitions is too minimal. It consists of a single modifier, `implicit`, that can be attached to a large number of language constructs. A problem with this for newcomers is that it conveys mechanism instead of intent. For instance, a typeclass instance is an implicit object or val if unconditional and an implicit def with implicit parameters referring to some class if conditional. This describes precisely what the implicit definitions translate to -- just drop the `implicit` modifier, and that's it! But the cues that define intent are rather indirect and can be easily misread, as demonstrated by the definitions of `i1` and `i2` above. + + 4. The syntax of implicit parameters also has shortcomings. While implicit _parameters_ are designated specifically, arguments are not. Passing an argument to an implicit parameter looks like a regular application `f(arg)`. This is problematic because it means there can be confusion regarding what parameter gets instantiated in a call. For instance, in + ```scala + def currentMap(implicit ctx: Context): Map[String, Int] + ``` + one cannot write `currentMap("abc")` since the string "abc" is taken as explicit argument to the implicit `ctx` parameter. One has to write `currentMap.apply("abc")` instead, which is awkward and irregular. For the same reason, a method definition can only have one implicit parameter section and it must always come last. 
This restriction not only reduces orthogonality, but also prevents some useful program constructs, such as a method with a regular parameter whose type depends on an implicit value. Finally, it's also a bit annoying that implicit parameters must have a name, even though in many cases that name is never referenced. + + 5. Implicits pose challenges for tooling. The set of available implicits depends on context, so command completion has to take context into account. This is feasible in an IDE, but docs like ScalaDoc that are based on static web pages can only provide an approximation. Another problem is that failed implicit searches often give very unspecific error messages, in particular if some deeply recursive implicit search has failed. Note that the Dotty compiler already implements some improvements in this case, but challenges still remain. + +None of these shortcomings is fatal; after all, implicits are very widely used, and many libraries and applications rely on them. But together, they make code using implicits a lot more cumbersome and less clear than it could be. + +Historically, many of these shortcomings come from the way implicits were gradually "discovered" in Scala. Scala originally had only implicit conversions with the intended use case of "extending" a class or trait after it was defined, i.e. what is expressed by implicit classes in later versions of Scala. Implicit parameters and instance definitions came later in 2006, and we picked similar syntax since it seemed convenient. For the same reason, no effort was made to distinguish implicit imports or arguments from normal ones. + +Existing Scala programmers by and large have gotten used to the status quo and see little need for change. But for newcomers this status quo presents a big hurdle. I believe that if we want to overcome that hurdle, we should take a step back and allow ourselves to consider a radically new design. 
+ +### The New Design + +The following pages introduce a redesign of contextual abstractions in Scala. They introduce four fundamental changes: + + 1. [Witnesses](./witnesses.md) are a new way to define basic terms that can be synthesized. They replace implicit definitions. The core principle of the proposal is that, rather than mixing the `implicit` modifier with a large number of features, we have a single way to define terms that can be synthesized for types. + + 2. [Given Clauses](./given-clauses.md) are a new syntax for implicit _parameters_ and their _arguments_. Both are introduced with the same keyword, `given`. This unambiguously aligns parameters and arguments, solving a number of language warts. It also allows us to have several implicit parameter sections, and to have implicit parameters followed by normal ones. + + 3. [Witness Imports](./witness-imports.md) are a new class of imports that specifically import witnesses and nothing else. Witnesses _must be_ imported with witness imports, a plain import will no longer bring them into scope. + + 4. [Implicit Conversions](./conversions.md) are now expressed as witnesses of a standard `Conversion` class. All other forms of implicit conversions will be phased out. + +This section also contains pages describing other language features that are related to context abstraction. These are: + + - [Context Bounds](./context-bounds.md), which carry over unchanged. + - [Extension Methods](./extension-methods.md) replace implicit classes in a way that integrates better with typeclasses. + - [Implementing Typeclasses](./typeclasses.md) demonstrates how some common typeclasses can be implemented using the new constructs. + - [Typeclass Derivation](./derivation.md) introduces constructs to automatically derive typeclass instances for ADTs. + - [Multiversal Equality](./multiversal-equality.md) introduces a special typeclass to support type safe equality. 
+ - [Implicit Function Types](./implicit-function-types.md) provide a way to abstract over given clauses. + - [Implicit By-Name Parameters](./implicit-by-name-parameters.md) are an essential tool to define recursive synthesized values without looping. + - [Relationship with Scala 2 Implicits](./relationship-implicits.md) discusses the relationship between old-style implicits and new-style witnesses and how to migrate from one to the other. + +Overall, the new design achieves a better separation of term inference from the rest of the language: There is a single way to define witnesses instead of a multitude of forms all taking an `implicit` modifier. There is a single way to introduce implicit parameters and arguments instead of conflating implicit with normal arguments. There is a separate way to import witnesses that does not allow them to hide in a sea of normal imports. And there is a single way to define an implicit conversion which is clearly marked as such and does not require special syntax. + +This design thus avoids feature interactions and makes the language more consistent and orthogonal. It will make implicits easier to learn and harder to abuse. It will greatly improve the clarity of the 95% of Scala programs that use implicits. It thus has the potential to fulfil the promise of term inference in a principled way that is also accessible and friendly. + +Could we achieve the same goals by tweaking existing implicits? After having tried for a long time, I believe now that this is impossible. + + - First, some of the problems are clearly syntactic and require different syntax to solve them. + - Second, there is the problem of how to migrate. We cannot change the rules in mid-flight. At some stage of language evolution we need to accommodate both the new and the old rules. 
With a syntax change, this is easy: Introduce the new syntax with new rules, support the old syntax for a while to facilitate cross compilation, deprecate and phase out the old syntax at some later time. Keeping the same syntax does not offer this path, and in fact does not seem to offer any viable path for evolution. + - Third, even if we somehow succeeded with migration, we would still have the problem of + how to teach this. We cannot make existing tutorials go away. Almost all existing tutorials start with implicit conversions, which will go away; they use normal imports, which will go away, and they explain calls to methods with implicit parameters by expanding them to plain applications, which will also go away. This means that we'd have + to add modifications and qualifications to all existing literature and courseware, likely causing more confusion for beginners instead of less. By contrast, with a new syntax there is a clear criterion: Any book or courseware that mentions `implicit` is outdated and should be updated. + diff --git a/docs/docs/reference/contextual-witnesses/multiversal-equality.md b/docs/docs/reference/contextual-witnesses/multiversal-equality.md new file mode 100644 index 000000000000..6928712984bd --- /dev/null +++ b/docs/docs/reference/contextual-witnesses/multiversal-equality.md @@ -0,0 +1,213 @@ +--- +layout: doc-page +title: "Multiversal Equality" +--- + +Previously, Scala had universal equality: Two values of any types +could be compared with each other with `==` and `!=`. This came from +the fact that `==` and `!=` are implemented in terms of Java's +`equals` method, which can also compare values of any two reference +types. + +Universal equality is convenient. But it is also dangerous since it +undermines type safety. For instance, let's assume one is left after some refactoring +with an erroneous program where a value `y` has type `S` instead of the correct type `T`. + +```scala +val x = ... // of type T +val y = ... 
// of type S, but should be T +x == y // typechecks, will always yield false +``` + +If `y` gets compared to other values of type `T`, +the program will still typecheck, since values of all types can be compared with each other. +But it will probably give unexpected results and fail at runtime. + +Multiversal equality is an opt-in way to make universal equality +safer. It uses a binary typeclass `Eql` to indicate that values of +two given types can be compared with each other. +The example above would not typecheck if `S` or `T` was a class +that derives `Eql`, e.g. +```scala +class T derives Eql +``` +Alternatively, one can also provide an `Eql` witness directly, like this: +```scala +witness of Eql[T, T] = Eql.derived +``` +This definition effectively says that values of type `T` can (only) be +compared to other values of type `T` when using `==` or `!=`. The definition +affects type checking but it has no significance for runtime +behavior, since `==` always maps to `equals` and `!=` always maps to +the negation of `equals`. The right hand side `Eql.derived` of the definition +is a value that has any `Eql` instance as its type. Here is the definition of class +`Eql` and its companion object: +```scala +package scala +import annotation.implicitNotFound + +@implicitNotFound("Values of types ${L} and ${R} cannot be compared with == or !=") +sealed trait Eql[-L, -R] + +object Eql { + object derived extends Eql[Any, Any] +} +``` + +One can have several `Eql` witnesses for a type. For example, the four +definitions below make values of type `A` and type `B` comparable with +each other, but not comparable to anything else: + +```scala +witness of Eql[A, A] = Eql.derived +witness of Eql[B, B] = Eql.derived +witness of Eql[A, B] = Eql.derived +witness of Eql[B, A] = Eql.derived +``` +The `scala.Eql` object defines a number of `Eql` witnesses that together +define a rule book for what standard types can be compared (more details below). 
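
With the four witnesses above in scope, comparisons behave as sketched below (proposal syntax; `A` and `B` are hypothetical classes, and the code is illustrative rather than compilable with any released compiler):
```scala
class A
class B

witness of Eql[A, A] = Eql.derived
witness of Eql[A, B] = Eql.derived
witness of Eql[B, A] = Eql.derived
witness of Eql[B, B] = Eql.derived

val a = new A
val b = new B

a == a    // ok: there is a witness for Eql[A, A]
a == b    // ok: there is a witness for Eql[A, B]
a == "x"  // error: no Eql[A, String] witness
```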
+ +There's also a "fallback" instance named `eqlAny` that allows comparisons +over all types that do not themselves have an `Eql` witness. `eqlAny` is defined as follows: + +```scala +def eqlAny[L, R]: Eql[L, R] = Eql.derived +``` + +Even though `eqlAny` is not declared a witness, the compiler will still construct an `eqlAny` instance as the answer to an implicit search for the +type `Eql[L, R]`, unless `L` or `R` have `Eql` witnesses +defined on them, or the language feature `strictEquality` is enabled. + +The primary motivation for having `eqlAny` is backwards compatibility; +if this is of no concern, one can disable `eqlAny` by enabling the language +feature `strictEquality`. As for all language features, this can be done either +with an import + +```scala +import scala.language.strictEquality +``` +or with a command line option `-language:strictEquality`. + +## Deriving Eql Witnesses + +Instead of defining `Eql` witnesses directly, it is often more convenient to derive them. Example: +```scala +class Box[T](x: T) derives Eql +``` +By the usual rules of [typeclass derivation](./derivation.md), +this generates the following `Eql` witness in the companion object of `Box`: +```scala +witness [T, U](given Eql[T, U]) of Eql[Box[T], Box[U]] = Eql.derived +``` +That is, two boxes are comparable with `==` or `!=` if their elements are. Examples: +```scala +new Box(1) == new Box(1L) // ok since there is an instance for `Eql[Int, Long]` +new Box(1) == new Box("a") // error: can't compare +new Box(1) == 1 // error: can't compare +``` + +## Precise Rules for Equality Checking + +The precise rules for equality checking are as follows. + +If the `strictEquality` feature is enabled then +a comparison using `x == y` or `x != y` between values `x: T` and `y: U` +is legal if there is a witness for `Eql[T, U]`. + +In the default case where the `strictEquality` feature is not enabled the comparison is +also legal if + + 1. `T` and `U` are the same, or + 2. 
one of `T`, `U` is a subtype of the _lifted_ version of the other type, or + 3. neither `T` nor `U` has a _reflexive_ `Eql` witness. + +Explanations: + + - _lifting_ a type `S` means replacing all references to abstract types + in covariant positions of `S` by their upper bound, and replacing + all refinement types in covariant positions of `S` by their parent. + - a type `T` has a _reflexive_ `Eql` witness if the implicit search for `Eql[T, T]` + succeeds. + +## Predefined Eql Instances + +The `Eql` object defines witnesses for comparing + - the primitive types `Byte`, `Short`, `Char`, `Int`, `Long`, `Float`, `Double`, `Boolean`, and `Unit`, + - `java.lang.Number`, `java.lang.Boolean`, and `java.lang.Character`, + - `scala.collection.Seq`, and `scala.collection.Set`. + +Witnesses are defined so that every one of these types has a _reflexive_ `Eql` witness, and the following holds: + + - Primitive numeric types can be compared with each other. + - Primitive numeric types can be compared with subtypes of `java.lang.Number` (and _vice versa_). + - `Boolean` can be compared with `java.lang.Boolean` (and _vice versa_). + - `Char` can be compared with `java.lang.Character` (and _vice versa_). + - Two sequences (of arbitrary subtypes of `scala.collection.Seq`) can be compared + with each other if their element types can be compared. The two sequence types + need not be the same. + - Two sets (of arbitrary subtypes of `scala.collection.Set`) can be compared + with each other if their element types can be compared. The two set types + need not be the same. + - Any subtype of `AnyRef` can be compared with `Null` (and _vice versa_). + +## Why Two Type Parameters? + +One particular feature of the `Eql` type is that it takes _two_ type parameters, representing the types of the two items to be compared. By contrast, conventional +implementations of an equality type class take only a single type parameter which represents the common type of _both_ operands. 
One type parameter is simpler than two, so why go through the additional complication? The reason has to do with the fact that, rather than coming up with a type class where no operation existed before, +we are dealing with a refinement of pre-existing, universal equality. It's best illustrated through an example. + +Say you want to come up with a safe version of the `contains` method on `List[T]`. The original definition of `contains` in the standard library was: +```scala +class List[+T] { + ... + def contains(x: Any): Boolean +} +``` +That uses universal equality in an unsafe way since it permits arguments of any type to be compared with the list's elements. The "obvious" alternative definition +```scala + def contains(x: T): Boolean +``` +does not work, since it refers to the covariant parameter `T` in a nonvariant context. The only variance-correct way to use the type parameter `T` in `contains` is as a lower bound: +```scala + def contains[U >: T](x: U): Boolean +``` +This generic version of `contains` is the one used in the current (Scala 2.13) version of `List`. +It looks different but it admits exactly the same applications as the `contains(x: Any)` definition we started with. +However, we can make it more useful (i.e. restrictive) by adding an `Eql` parameter: +```scala + def contains[U >: T](x: U)(given Eql[T, U]): Boolean // (1) +``` +This version of `contains` is equality-safe! More precisely, given +`x: T`, `xs: List[T]` and `y: U`, then `xs.contains(y)` is type-correct if and only if +`x == y` is type-correct. + +Unfortunately, the crucial ability to "lift" equality type checking from simple equality and pattern matching to arbitrary user-defined operations gets lost if we restrict ourselves to an equality class with a single type parameter. 
Consider the following signature of `contains` with a hypothetical `Eql1[T]` type class: +```scala + def contains[U >: T](x: U)(given Eql1[U]): Boolean // (2) +``` +This version could be applied just as widely as the original `contains(x: Any)` method, +since the `Eql1[Any]` fallback is always available! So we have gained nothing. What got lost in the transition to a single-parameter type class was the original rule that `Eql[A, B]` is available only if neither `A` nor `B` has a reflexive `Eql` witness. That rule simply cannot be expressed if there is a single type parameter for `Eql`. + +The situation is different under `-language:strictEquality`. In that case, +the `Eql[Any, Any]` or `Eql1[Any]` witnesses would never be available, and the +single and two-parameter versions would indeed coincide for most practical purposes. + +But assuming `-language:strictEquality` immediately and everywhere poses migration problems which might well be insurmountable. Consider again `contains`, which is in the standard library. Parameterizing it with the `Eql` type class as in (1) is an immediate win since it rules out nonsensical applications while still allowing all sensible ones. +So it can be done almost at any time, modulo binary compatibility concerns. +On the other hand, parameterizing `contains` with `Eql1` as in (2) would make `contains` +unusable for all types that have not yet declared an `Eql1` witness, including all +types coming from Java. This is clearly unacceptable. It would lead to a situation where, +rather than migrating existing libraries to use safe equality, the only upgrade path is to have parallel libraries, with the new version only catering to types deriving `Eql1` and the old version dealing with everything else. Such a split of the ecosystem would be very problematic, which means the cure is likely to be worse than the disease. 
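
To make the contrast concrete, here is a sketch (proposal syntax, hypothetical values) of what each version of `contains` would accept, based on the predefined `Eql` instances described above:
```scala
val xs: List[Int] = List(1, 2, 3)

// With the two-parameter version (1):
xs.contains(2)     // ok: there is a reflexive witness for Eql[Int, Int]
xs.contains(2L)    // ok: primitive numeric types are comparable with each other
xs.contains("two") // error: no Eql[Int, String] witness

// With the hypothetical one-parameter version (2), all three calls
// would compile, since the Eql1[Any] fallback applies everywhere.
```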
+ +For these reasons, it looks like a two-parameter type class is the only way forward because it can take the existing ecosystem where it is and migrate it towards a future where more and more code uses safe equality. + +In applications where `-language:strictEquality` is the default, one could also introduce a one-parameter type alias such as +```scala +type Eq[-T] = Eql[T, T] +``` +Operations needing safe equality could then use this alias instead of the two-parameter `Eql` class. But it would only +work under `-language:strictEquality`, since otherwise the universal `Eq[Any]` instance would be available everywhere. + + +More on multiversal equality is found in a [blog post](http://www.scala-lang.org/blog/2016/05/06/multiversal-equality.html) +and a [GitHub issue](https://github.com/lampepfl/dotty/issues/1247). diff --git a/docs/docs/reference/contextual-witnesses/relationship-implicits.md b/docs/docs/reference/contextual-witnesses/relationship-implicits.md new file mode 100644 index 000000000000..34dc521cadd0 --- /dev/null +++ b/docs/docs/reference/contextual-witnesses/relationship-implicits.md @@ -0,0 +1,189 @@ +--- +layout: doc-page +title: Relationship with Scala 2 Implicits +--- + +Many, but not all, of the new contextual abstraction features in Scala 3 can be mapped to Scala 2's implicits. This page gives a rundown on the relationships between new and old features. + +## Simulating Scala 3 Contextual Abstraction Concepts with Scala 2 Implicits + +### Witnesses + +Witnesses can be mapped to combinations of implicit objects, classes and implicit methods. + + 1. Witnesses without parameters are mapped to implicit objects. E.g., + ```scala + witness intOrd of Ord[Int] { ... } + ``` + maps to + ```scala + implicit object IntOrd extends Ord[Int] { ... } + ``` + 2. Parameterized witnesses are mapped to combinations of classes and implicit methods. E.g., + ```scala + witness listOrd[T](given ord: Ord[T]) of Ord[List[T]] { ... 
} + ``` + maps to + ```scala + class ListOrd[T](implicit ord: Ord[T]) extends Ord[List[T]] { ... } + final implicit def ListOrd[T](implicit ord: Ord[T]): ListOrd[T] = new ListOrd[T] + ``` + 3. Alias witnesses map to implicit methods or implicit lazy vals. If an alias has neither type nor implicit parameters, + it is treated as a lazy val, unless the right hand side is a simple reference, in which case we can use a forwarder to + that reference without caching it. + +Examples: +```scala +witness global of ExecutionContext = new ForkJoinContext() + +val ctx: Context +witness of Context = ctx +``` +would map to +```scala +final implicit lazy val global: ExecutionContext = new ForkJoinContext() +final implicit def witness_Context = ctx +``` + +### Anonymous Witnesses + +Anonymous witnesses get compiler-synthesized names, which are generated in a reproducible way from the implemented type(s). For example, if the names of the `IntOrd` and `ListOrd` witnesses above were left out, the following names would be synthesized instead: +```scala +witness witness_Ord_Int of Ord[Int] { ... } +witness witness_Ord_List[T] of Ord[List[T]] { ... } +``` +The synthesized names are formed from + + - the prefix `witness_`, + - the simple name(s) of the implemented type(s), leaving out any prefixes, + - the simple name(s) of the toplevel argument type constructors to these types. + +Tuples are treated as transparent, i.e. a type `F[(X, Y)]` would get the synthesized name +`F_X_Y`. Directly implemented function types `A => B` are represented as `A_to_B`. Function types used as arguments to other type constructors are represented as `Function`. 
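
Applying these rules gives, for instance (a sketch in the proposal syntax; the names shown are derived from the scheme above, not actual compiler output):
```scala
witness of Ord[(Int, String)] { ... }  // tuples are transparent: witness_Ord_Int_String
witness of (Int => String) { ... }     // directly implemented function type: witness_Int_to_String
witness of Ord[Int => String] { ... }  // function type as type argument: witness_Ord_Function
```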
+ +### Anonymous Collective Extensions + +Anonymous collective extensions also get compiler-synthesized names, which are formed from + + - the prefix `extension_` + - the name of the first defined extension method + - the simple name of the first parameter type of this extension method + - the simple name(s) of the toplevel argument type constructors to this type. + +For example, the extension +```scala +extension of [T] (xs: List[T]) with { + def second = ... +} +``` +gets the synthesized name `extension_second_List_T`. + +### Given Clauses + +Given clauses correspond largely to Scala 2's implicit parameter clauses. E.g. +```scala +def max[T](x: T, y: T)(given ord: Ord[T]): T +``` +would be written +```scala +def max[T](x: T, y: T)(implicit ord: Ord[T]): T +``` +in Scala 2. The main difference concerns applications of such parameters. +Explicit arguments to parameters of given clauses _must_ be written using `given`, +mirroring the definition syntax. E.g., `max(2, 3)(given IntOrd)`. +Scala 2 uses normal applications `max(2, 3)(IntOrd)` instead. The Scala 2 syntax has some inherent ambiguities and restrictions which are overcome by the new syntax. For instance, multiple implicit parameter lists are not available in the old syntax, even though they can be simulated using auxiliary objects in the "Aux" pattern. + +The `summon` method corresponds to `implicitly` in Scala 2. +It is precisely the same as the `the` method in Shapeless. +The difference between `summon` (or `the`) and `implicitly` is +that `summon` can return a more precise type than the type that was +asked for. + +### Context Bounds + +Context bounds are the same in both language versions. They expand to the respective forms of implicit parameters. + +**Note:** To ease migration, context bounds in Dotty map for a limited time to old-style implicit parameters for which arguments can be passed either with `given` or +with a normal application. 
Once old-style implicits are deprecated, context bounds +will map to given clauses instead. + +### Extension Methods + +Extension methods have no direct counterpart in Scala 2, but they can be simulated with implicit classes. For instance, the extension method +```scala +def (c: Circle).circumference: Double = c.radius * math.Pi * 2 +``` +could be simulated to some degree by +```scala +implicit class CircleDecorator(c: Circle) extends AnyVal { + def circumference: Double = c.radius * math.Pi * 2 +} +``` +Abstract extension methods in traits that are implemented in witnesses have no direct counterpart in Scala 2. The only way to simulate these is to make implicit classes available through imports. The Simulacrum macro library can automate this process in some cases. + +### Typeclass Derivation + +Typeclass derivation has no direct counterpart in the Scala 2 language. Comparable functionality can be achieved by macro-based libraries such as Shapeless, Magnolia, or scalaz-deriving. + +### Implicit Function Types + +Implicit function types have no analogue in Scala 2. + +### Implicit By-Name Parameters + +Implicit by-name parameters are not supported in Scala 2, but can be emulated to some degree by the `Lazy` type in Shapeless. + +## Simulating Scala 2 Implicits in Scala 3 + +### Implicit Conversions + +Implicit conversion methods in Scala 2 can be expressed as witnesses of the `scala.Conversion` class in Dotty. E.g. instead of +```scala +implicit def stringToToken(str: String): Token = new KeyWord(str) +``` +one can write +```scala +witness stringToToken of Conversion[String, Token] { + def apply(str: String): Token = new KeyWord(str) +} +``` +or +```scala +witness stringToToken of Conversion[String, Token] = new KeyWord(_) +``` + +### Implicit Classes + +Implicit classes in Scala 2 are often used to define extension methods, which are directly supported in Dotty. Other uses of implicit classes can be simulated by a pair of a regular class and a witness of `Conversion` type. 
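
For instance, a Scala 2 implicit class that is used for its implicit conversion rather than for extension methods could be simulated like this (a sketch in the proposal syntax; `RichCircle` is a hypothetical wrapper for the `Circle` class used above):
```scala
// Scala 2:
//   implicit class RichCircle(val c: Circle) {
//     def diameter: Double = c.radius * 2
//   }
// simulated by a regular class plus a Conversion witness:
class RichCircle(val c: Circle) {
  def diameter: Double = c.radius * 2
}
witness of Conversion[Circle, RichCircle] = new RichCircle(_)
```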
+ +### Implicit Values + +Implicit `val` definitions in Scala 2 can be expressed in Dotty using a regular `val` definition and an alias witness. +E.g., Scala 2's +```scala +lazy implicit val pos: Position = tree.sourcePos +``` +can be expressed in Dotty as +```scala +lazy val pos: Position = tree.sourcePos +witness of Position = pos +``` + +### Abstract Implicits + +An abstract implicit `val` or `def` in Scala 2 can be expressed in Dotty using a regular abstract definition and an alias witness. E.g., Scala 2's +```scala +implicit def symDecorator: SymDecorator +``` +can be expressed in Dotty as +```scala +def symDecorator: SymDecorator +witness of SymDecorator = symDecorator +``` + +## Implementation Status and Timeline + +The Dotty compiler implements both Scala 2's implicits and the new abstractions. In fact, support for Scala 2's implicits is an essential part of the common language subset between 2.13/2.14 and Dotty. +Migration to the new abstractions will be supported by making automatic rewrites available. + +Depending on adoption patterns, old-style implicits might start to be deprecated in a version following Scala 3.0. diff --git a/docs/docs/reference/contextual-witnesses/typeclasses.md b/docs/docs/reference/contextual-witnesses/typeclasses.md new file mode 100644 index 000000000000..1df5dfd55921 --- /dev/null +++ b/docs/docs/reference/contextual-witnesses/typeclasses.md @@ -0,0 +1,66 @@ +--- +layout: doc-page +title: "Implementing Typeclasses" +--- + +Witnesses, extension methods, and context bounds +allow a concise and natural expression of _typeclasses_. Typeclasses are just traits +with canonical implementations defined by witnesses.
Here are some examples of standard typeclasses: + +### Semigroups and monoids: + +```scala +trait SemiGroup[T] { + @infix def (x: T) combine (y: T): T +} + +trait Monoid[T] extends SemiGroup[T] { + def unit: T +} + +object Monoid { + def apply[T](given m: Monoid[T]) = m +} + +witness of Monoid[String] { + def (x: String) combine (y: String): String = x.concat(y) + def unit: String = "" +} + +witness of Monoid[Int] { + def (x: Int) combine (y: Int): Int = x + y + def unit: Int = 0 +} + +def sum[T: Monoid](xs: List[T]): T = + xs.foldLeft(Monoid[T].unit)(_ combine _) +``` + +### Functors and monads: + +```scala +trait Functor[F[_]] { + def [A, B](x: F[A]).map(f: A => B): F[B] +} + +trait Monad[F[_]] extends Functor[F] { + def [A, B](x: F[A]).flatMap(f: A => F[B]): F[B] + def [A, B](x: F[A]).map(f: A => B) = x.flatMap(f `andThen` pure) + + def pure[A](x: A): F[A] +} + +witness listMonad of Monad[List] { + def [A, B](xs: List[A]).flatMap(f: A => List[B]): List[B] = + xs.flatMap(f) + def pure[A](x: A): List[A] = + List(x) +} + +witness readerMonad[Ctx] of Monad[[X] =>> Ctx => X] { + def [A, B](r: Ctx => A).flatMap(f: A => Ctx => B): Ctx => B = + ctx => f(r(ctx))(ctx) + def pure[A](x: A): Ctx => A = + ctx => x +} +``` diff --git a/docs/docs/reference/contextual-witnesses/witness-imports.md b/docs/docs/reference/contextual-witnesses/witness-imports.md new file mode 100644 index 000000000000..61c8f4b07cfc --- /dev/null +++ b/docs/docs/reference/contextual-witnesses/witness-imports.md @@ -0,0 +1,118 @@ +--- +layout: doc-page +title: "Witness Imports" +--- + +A special form of import wildcard selector is used to import witnesses. Example: +```scala +object A { + class TC + witness tc of TC + def f(given TC) = ??? +} +object B { + import A._ + import A.{given _} +} +``` +In the code above, the `import A._` clause of object `B` will import all members +of `A` _except_ the witness `tc`. Conversely, the second import `import A.{given _}` +will import _only_ that witness. 
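For instance (a sketch building on the definitions above), calling `f` inside `B` needs both imports:

```scala
object B {
  import A._          // brings f (and the class TC) into scope, but not the witness tc
  import A.{given _}  // brings the witness tc into scope
  f                   // the (given TC) parameter is synthesized from tc
}
```

With only the first import, the call to `f` would fail to compile, since no witness of `TC` would be in scope.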
The two import clauses can also be merged into one: +```scala +object B { + import A.{given _, _} +} +``` + +Generally, a normal wildcard selector `_` brings all definitions other than witnesses or extensions into scope +whereas a `given _` selector brings all witnesses (including those resulting from extensions) into scope. + +There are two main benefits arising from these rules: + + - It is made clearer where witnesses in scope are coming from. + In particular, it is not possible to hide imported witnesses in a long list of regular wildcard imports. + - It enables importing all witnesses + without importing anything else. This is particularly important since witnesses + can be anonymous, so the usual recourse of using named imports is not + practical. + +### Importing By Type + +Since witnesses can be anonymous, it is not always practical to import them by their name, and wildcard imports are typically used instead. By-type imports provide a more specific alternative to wildcard imports, which makes it clearer what is imported. Example: + +```scala +import A.{given TC} +``` +This imports any witness in `A` that has a type which conforms to `TC`. Importing witnesses of several types `T1,...,Tn` +is expressed by multiple `given` selectors. +``` +import A.{given T1, ..., given Tn} +``` +Importing all witnesses of a parameterized type is expressed by wildcard arguments. +For instance, assuming the object +```scala +object Instances { + witness intOrd of Ordering[Int] + witness listOrd[T: Ordering] of Ordering[List[T]] + witness ec of ExecutionContext = ... + witness im of Monoid[Int] +} +``` +the import +```scala +import Instances.{given Ordering[?], given ExecutionContext} +``` +would import the `intOrd`, `listOrd`, and `ec` instances but leave out the `im` instance, since it fits none of the specified bounds. + +By-type imports can be mixed with by-name imports. If both are present in an import clause, by-type imports come last.
For instance, the import clause +```scala +import Instances.{im, given Ordering[?]} +``` +would import `im`, `intOrd`, and `listOrd` but leave out `ec`. + +### Migration + +The rules for imports stated above have the consequence that a library +would have to migrate in lockstep with all its users from old-style implicits and +normal imports to witnesses and witness imports. + +The following modifications avoid this hurdle to migration. + + 1. A `given` import selector also brings old-style implicits into scope. So, in Scala 3.0 + an old-style implicit definition can be brought into scope either by a `_` or a `given _` wildcard selector. + + 2. In Scala 3.1, old-style implicits accessed through a `_` wildcard import will give a deprecation warning. + + 3. In some version after 3.1, old-style implicits accessed through a `_` wildcard import will give a compiler error. + +These rules mean that library users can use `given _` selectors to access old-style implicits in Scala 3.0, +and will be gently nudged and then forced to do so in later versions. Libraries can then switch to +witnesses and witness imports once their user base has migrated.
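As a sketch of the first rule (with a hypothetical `OldLib` object), both selectors give access to an old-style implicit in Scala 3.0:

```scala
object OldLib {
  implicit val defaultOrd: Ordering[Int] = Ordering.Int  // old-style implicit
}

object Client {
  import OldLib._           // works in 3.0; deprecated in 3.1, an error later
  import OldLib.{given _}   // forward-compatible: also brings defaultOrd into scope
}
```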
+ +### Syntax + +``` +Import ::= ‘import’ ImportExpr {‘,’ ImportExpr} +ImportExpr ::= StableId ‘.’ ImportSpec +ImportSpec ::= id + | ‘_’ + | ‘{’ ImportSelectors ‘}’ +ImportSelectors ::= id [‘=>’ id | ‘=>’ ‘_’] [‘,’ ImportSelectors] + | WildCardSelector {‘,’ WildCardSelector} +WildCardSelector ::= ‘given’ (‘_’ | InfixType) + | ‘_’ +Export ::= ‘export’ ImportExpr {‘,’ ImportExpr} +``` \ No newline at end of file diff --git a/docs/docs/reference/contextual-witnesses/witnesses.md b/docs/docs/reference/contextual-witnesses/witnesses.md new file mode 100644 index 000000000000..ab566c00df8d --- /dev/null +++ b/docs/docs/reference/contextual-witnesses/witnesses.md @@ -0,0 +1,88 @@ +--- +layout: doc-page +title: "Witnesses" +--- + +Witnesses define "canonical" values of certain types +that serve for synthesizing arguments to [implicit parameters](./given-clauses.md). Example: + +```scala +trait Ord[T] { + def compare(x: T, y: T): Int + def (x: T) < (y: T) = compare(x, y) < 0 + def (x: T) > (y: T) = compare(x, y) > 0 +} + +witness intOrd of Ord[Int] { + def compare(x: Int, y: Int) = + if (x < y) -1 else if (x > y) +1 else 0 +} + +witness listOrd[T](given ord: Ord[T]) of Ord[List[T]] { + + def compare(xs: List[T], ys: List[T]): Int = (xs, ys) match + case (Nil, Nil) => 0 + case (Nil, _) => -1 + case (_, Nil) => +1 + case (x :: xs1, y :: ys1) => + val fst = ord.compare(x, y) + if (fst != 0) fst else compare(xs1, ys1) +} +``` +This code defines a trait `Ord` with two witnesses. `intOrd` defines +a witness of the type `Ord[Int]` whereas `listOrd[T]` defines witnesses +of `Ord[List[T]]` for all types `T` that come with a witness of `Ord[T]` +themselves. The `(given ord: Ord[T])` clause in `listOrd` defines a condition: there must be a +witness of type `Ord[T]` so that a witness of type `Ord[List[T]]` can +be synthesized. Such conditions are expanded by the compiler to implicit +parameters, which are explained in the [next section](./given-clauses.md).
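For instance (a sketch, assuming a method `max` that receives an `Ord` instance through a given clause), the compiler can chain these witnesses:

```scala
def max[T](x: T, y: T)(given ord: Ord[T]): T =
  if (ord.compare(x, y) < 0) y else x

max(2, 3)                    // the compiler synthesizes intOrd
max(List(1, 2), List(1, 3))  // the compiler synthesizes listOrd[Int](given intOrd)
```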
+ +## Anonymous Witnesses + +The name of a witness can be left out. So the definitions +of the last section can also be expressed like this: +```scala +witness of Ord[Int] { ... } +witness [T](given Ord[T]) of Ord[List[T]] { ... } +``` +If the name of a witness is missing, the compiler will synthesize a name from +the implemented type(s). + +## Alias Witnesses + +An alias can be used to define a witness that is equal to some expression. E.g.: +```scala +witness global of ExecutionContext = new ForkJoinPool() +``` +This creates a witness `global` of type `ExecutionContext` that resolves to the right +hand side `new ForkJoinPool()`. +The first time `global` is accessed, a new `ForkJoinPool` is created, which is then +returned for this and all subsequent accesses to `global`. + +Alias witnesses can be anonymous, e.g. +```scala +witness of Position = enclosingTree.position +witness (given outer: Context) of Context = outer.withOwner(currentOwner) +``` +An alias witness can have type parameters and implicit parameters just like any other witness, +but it can only implement a single type. + +## Witness Initialization + +A witness without type or implicit parameters is initialized on-demand, the first +time it is accessed. If a witness has type or implicit parameters, a fresh instance +is created for each reference. + +## Syntax + +Here is the new syntax for witnesses, seen as a delta from the [standard context free syntax of Scala 3](../../internals/syntax.md). +``` +TmplDef ::= ... + | ‘witness’ WitnessDef +WitnessDef ::= WitnessSig ‘of’ [‘_’ ‘<:’] Type ‘=’ Expr + | WitnessSig ‘of’ [ConstrApp {‘,’ ConstrApp }] [TemplateBody] +WitnessSig ::= [id] [DefTypeParamClause] {GivenParamClause} +GivenParamClause ::= ‘(’ ‘given’ (DefParams | GivenTypes) ‘)’ +GivenTypes ::= Type {‘,’ Type} +``` +The identifier `id` can be omitted only if some types are implemented or the template body defines at least one extension method.