[Array](https://www.scala-lang.org/api/{{ site.scala-version }}/scala/Array.html) is a special kind of collection in Scala. On the one hand, Scala arrays correspond one-to-one to Java arrays. That is, a Scala array `Array[Int]` is represented as a Java `int[]`, an `Array[Double]` is represented as a Java `double[]` and an `Array[String]` is represented as a Java `String[]`. But at the same time, Scala arrays offer much more than their Java analogues. First, Scala arrays can be _generic_. That is, you can have an `Array[T]`, where `T` is a type parameter or abstract type. Second, Scala arrays are compatible with Scala sequences - you can pass an `Array[T]` where a `Seq[T]` is required. Finally, Scala arrays also support all sequence operations. Here's an example of this in action:
{% tabs arrays_1 %}
{% tab 'Scala 2 and 3' for=arrays_1 %}
```scala
scala> val a1 = Array(1, 2, 3)
val a1: Array[Int] = Array(1, 2, 3)

scala> val a2 = a1.map(_ * 3)
val a2: Array[Int] = Array(3, 6, 9)

scala> val a3 = a2.filter(_ % 2 != 0)
val a3: Array[Int] = Array(3, 9)

scala> a3.reverse
val res0: Array[Int] = Array(9, 3)
```
{% endtab %}
{% endtabs %}
Given that Scala arrays are represented just like Java arrays, how can these additional features be supported in Scala? The Scala array implementation makes systematic use of implicit conversions. In Scala, an array does not pretend to _be_ a sequence. It can't really be that because the data type representation of a native array is not a subtype of `Seq`. Instead there is an implicit "wrapping" conversion between arrays and instances of class `scala.collection.mutable.ArraySeq`, which is a subclass of `Seq`. Here you see it in action:
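A REPL session along the following lines illustrates the wrapping and unwrapping (the result numbering is illustrative):

```scala
scala> val seq: collection.Seq[Int] = a1
val seq: scala.collection.Seq[Int] = ArraySeq(1, 2, 3)

scala> val a4: Array[Int] = seq.toArray
val a4: Array[Int] = Array(1, 2, 3)

scala> a1 eq a4
val res1: Boolean = false
```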
The interaction above demonstrates that arrays are compatible with sequences, because there's an implicit conversion from arrays to `ArraySeq`s. To go the other way, from an `ArraySeq` to an `Array`, you can use the `toArray` method defined in `Iterable`. The last REPL line above shows that wrapping and then unwrapping with `toArray` produces a copy of the original array.
There is yet another implicit conversion that gets applied to arrays. This conversion simply "adds" all sequence methods to arrays but does not turn the array itself into a sequence. "Adding" means that the array is wrapped in another object of type `ArrayOps` which supports all sequence methods. Typically, this `ArrayOps` object is short-lived; it will usually be inaccessible after the call to the sequence method and its storage can be recycled. Modern VMs often avoid creating this object entirely.
The difference between the two implicit conversions on arrays is shown in the next REPL dialogue:
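For instance, a session along these lines shows both conversions at work (the `ops` binding and result numbers are illustrative, and the printed `ArrayOps` value is abridged):

```scala
scala> val seq: collection.Seq[Int] = a1
val seq: scala.collection.Seq[Int] = ArraySeq(1, 2, 3)

scala> seq.reverse
val res2: scala.collection.Seq[Int] = ArraySeq(3, 2, 1)

scala> val ops: collection.ArrayOps[Int] = a1
val ops: scala.collection.ArrayOps[Int] = scala.collection.ArrayOps@...

scala> ops.reverse
val res3: Array[Int] = Array(3, 2, 1)
```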
You see that calling `reverse` on `seq`, which is an `ArraySeq`, will again give an `ArraySeq`. That's logical, because array seqs are `Seq`s, and calling `reverse` on any `Seq` will again give a `Seq`. On the other hand, calling `reverse` on the `ops` value of class `ArrayOps` will give an `Array`, not a `Seq`.
The `ArrayOps` example above was quite artificial, intended only to show the difference to `ArraySeq`. Normally, you'd never define a value of class `ArrayOps`. You'd just call a `Seq` method on an array:
{% tabs arrays_4 %}
{% tab 'Scala 2 and 3' for=arrays_4 %}
```scala
scala> a1.reverse
val res4: Array[Int] = Array(3, 2, 1)
```
{% endtab %}
{% endtabs %}
The `ArrayOps` object gets inserted automatically by the implicit conversion. So the line above is equivalent to
{% tabs arrays_5 %}
{% tab 'Scala 2 and 3' for=arrays_5 %}
```scala
scala> intArrayOps(a1).reverse
val res5: Array[Int] = Array(3, 2, 1)
```
{% endtab %}
{% endtabs %}
where `intArrayOps` is the implicit conversion that was inserted previously. This raises the question of how the compiler picked `intArrayOps` over the other implicit conversion to `ArraySeq` in the line above. After all, both conversions map an array to a type that supports a reverse method, which is what the input specified. The answer to that question is that the two implicit conversions are prioritized. The `ArrayOps` conversion has a higher priority than the `ArraySeq` conversion. The first is defined in the `Predef` object whereas the second is defined in a class `scala.LowPriorityImplicits`, which is inherited by `Predef`. Implicits in subclasses and subobjects take precedence over implicits in base classes. So if both conversions are applicable, the one in `Predef` is chosen. A very similar scheme works for strings.
So now you know how arrays can be compatible with sequences and how they can support all sequence operations. What about genericity? In Java, you cannot write a `T[]` where `T` is a type parameter. How then is Scala's `Array[T]` represented? In fact a generic array like `Array[T]` could be at run-time any of Java's eight primitive array types `byte[]`, `short[]`, `char[]`, `int[]`, `long[]`, `float[]`, `double[]`, `boolean[]`, or it could be an array of objects. The only common run-time type encompassing all of these types is `AnyRef` (or, equivalently, `java.lang.Object`), so that's the type to which the Scala compiler maps `Array[T]`. At run-time, when an element of an array of type `Array[T]` is accessed or updated there is a sequence of type tests that determine the actual array type, followed by the correct array operation on the Java array. These type tests slow down array operations somewhat. You can expect accesses to generic arrays to be three to four times slower than accesses to primitive or object arrays. This means that if you need maximal performance, you should prefer concrete to generic arrays. Representing the generic array type is not enough, however; there must also be a way to create generic arrays. This is an even harder problem, which requires a little help from you. To illustrate the issue, consider the following attempt to write a generic method that creates an array.
{% tabs arrays_6 class=tabs-scala-version %}
{% tab 'Scala 2' for=arrays_6 %}
```scala mdoc:fail
// this is wrong!
def evenElems[T](xs: Vector[T]): Array[T] = {
  val arr = new Array[T]((xs.length + 1) / 2)
  for (i <- 0 until xs.length by 2)
    arr(i / 2) = xs(i)
  arr
}
```
{% endtab %}
{% tab 'Scala 3' for=arrays_6 %}
```scala
// this is wrong!
def evenElems[T](xs: Vector[T]): Array[T] =
  val arr = new Array[T]((xs.length + 1) / 2)
  for i <- 0 until xs.length by 2 do
    arr(i / 2) = xs(i)
  arr
```
{% endtab %}
{% endtabs %}
The `evenElems` method returns a new array that consists of all elements of the argument vector `xs` which are at even positions in the vector. The first line of the body of `evenElems` creates the result array, which has the same element type as the argument. So depending on the actual type parameter for `T`, this could be an `Array[Int]`, or an `Array[Boolean]`, or an array of some other primitive type in Java, or an array of some reference type. But these types all have different runtime representations, so how is the Scala runtime going to pick the correct one? In fact, it can't do that based on the information it is given, because the actual type that corresponds to the type parameter `T` is erased at runtime. That's why you will get the following error message if you compile the code above:
```
error: cannot find class manifest for element type T
```
What's required here is that you help the compiler out by providing some runtime hint as to what the actual type parameter of `evenElems` is. This runtime hint takes the form of a class manifest of type `scala.reflect.ClassTag`. A class manifest is a type descriptor object which describes what the top-level class of a type is. As an alternative to class manifests there are also full manifests of type `scala.reflect.Manifest`, which describe all aspects of a type. But for array creation, only class manifests are needed.
The Scala compiler will construct class manifests automatically if you instruct it to do so. "Instructing" means that you demand a class manifest as an implicit parameter, like this:
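A sketch of such a definition, shown in Scala 2 syntax (the parameter name `t` is arbitrary; `ClassTag` lives in `scala.reflect`):

```scala
import scala.reflect.ClassTag

// this works
def evenElems[T](xs: Vector[T])(implicit t: ClassTag[T]): Array[T] = {
  val arr = new Array[T]((xs.length + 1) / 2)
  for (i <- 0 until xs.length by 2)
    arr(i / 2) = xs(i)
  arr
}
```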
Using an alternative and shorter syntax, you can also demand that the type comes with a class manifest by using a context bound. This means following the type with a colon and the class name `ClassTag`, like this:
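For example, again in Scala 2 syntax:

```scala
import scala.reflect.ClassTag

// this works
def evenElems[T: ClassTag](xs: Vector[T]): Array[T] = {
  val arr = new Array[T]((xs.length + 1) / 2)
  for (i <- 0 until xs.length by 2)
    arr(i / 2) = xs(i)
  arr
}
```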
The two revised versions of `evenElems` mean exactly the same. What happens in either case is that when the `Array[T]` is constructed, the compiler will look for a class manifest for the type parameter `T`, that is, it will look for an implicit value of type `ClassTag[T]`. If such a value is found, the manifest is used to construct the right kind of array. Otherwise, you'll see an error message like the one above.
Here is some REPL interaction that uses the `evenElems` method.
```scala
scala> evenElems(Vector(1, 2, 3, 4, 5))
val res6: Array[Int] = Array(1, 3, 5)

scala> evenElems(Vector("this", "is", "a", "test", "run"))
val res7: Array[java.lang.String] = Array(this, a, run)
```
In both cases, the Scala compiler automatically constructed a class manifest for the element type (first, `Int`, then `String`) and passed it to the implicit parameter of the `evenElems` method. The compiler can do that for all concrete types, but not if the argument is itself another type parameter without its class manifest. For instance, the following fails:
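For example, a definition such as the following is rejected (the exact error wording depends on the compiler version):

```scala
scala> def wrap[U](xs: Vector[U]) = evenElems(xs)
       error: No ClassTag available for U
```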
What happened here is that `evenElems` demands a class manifest for the type parameter `U`, but none was found. The solution in this case is, of course, to demand another implicit class manifest for `U`. So the following works:
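For example (the REPL echoes the desugared signature, so the generated implicit parameter becomes visible):

```scala
scala> def wrap[U: ClassTag](xs: Vector[U]) = evenElems(xs)
def wrap[U](xs: Vector[U])(implicit evidence$1: scala.reflect.ClassTag[U]): Array[U]
```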
This example also shows that the context bound in the definition of `U` is just a shorthand for an implicit parameter named here `evidence$1` of type `ClassTag[U]`.
The collection libraries have a uniform approach to equality and hashing.
It does not matter for the equality check whether a collection is mutable or immutable. For a mutable collection one simply considers its current elements at the time the equality test is performed. This means that a mutable collection might be equal to different collections at different times, depending on what elements are added or removed. This is a potential trap when using a mutable collection as a key in a hashmap. Example:
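A session along these lines shows the trap (the concrete values are illustrative and the output is abridged):

```scala
scala> import collection.mutable.{ArrayBuffer, HashMap}
scala> val buf = ArrayBuffer(1, 2, 3)
scala> val map = HashMap(buf -> 3)

scala> map(buf)
val res0: Int = 3

scala> buf(0) += 1   // mutating the key changes its hash code

scala> map(buf)
java.util.NoSuchElementException: key not found: ArrayBuffer(2, 2, 3)
```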
In this example, the selection in the last line will most likely fail because the hash-code of the array `buf` has changed in the second-to-last line. Therefore, the hash-code-based lookup will look at a different place than the one where `buf` was stored.
Like arrays, strings are not directly sequences, but they can be converted to them, and they also support all sequence operations. Here are some examples of operations you can invoke on strings.
{% tabs strings_1 %}
{% tab 'Scala 2 and 3' for=strings_1 %}

```scala
scala> val str = "hello"
val str: java.lang.String = hello

scala> str.reverse
val res6: String = olleh

scala> str.map(_.toUpper)
val res7: String = HELLO

scala> str.drop(3)
val res8: String = lo

scala> str.slice(1, 4)
val res9: String = ell

scala> val s: Seq[Char] = str
val s: Seq[Char] = hello
```

{% endtab %}
{% endtabs %}
These operations are supported by two implicit conversions. The first, low-priority conversion maps a `String` to a `WrappedString`, which is a subclass of `immutable.IndexedSeq`. This conversion was applied in the last line above, where a string was converted into a `Seq`. The other, high-priority conversion maps a string to a `StringOps` object, which adds all methods on immutable sequences to strings. This conversion was implicitly inserted in the method calls of `reverse`, `map`, `drop`, and `slice` in the example above.
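Views are collections whose transformation operations are evaluated lazily. A lazy variant of `map`, for instance, can be sketched like this:

```scala
def lazyMap[T, U](coll: Iterable[T], f: T => U) =
  new Iterable[U] {
    // apply f only when elements are actually demanded
    def iterator = coll.iterator.map(f)
  }
```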
Note that `lazyMap` constructs a new `Iterable` without stepping through all elements of the given collection `coll`. The given function `f` is instead applied to the elements of the new collection's `iterator` as they are demanded.
To go from a collection to its view, you can use the `view` method on the collection.
Let's see an example. Say you have a vector of Ints over which you want to map two functions in succession:
{% tabs views_2 class=tabs-scala-version %}
{% tab 'Scala 2' for=views_2 %}

```scala
scala> val v = Vector(1 to 10: _*)
val v: scala.collection.immutable.Vector[Int] =
  Vector(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)

scala> v.map(_ + 1).map(_ * 2)
val res5: scala.collection.immutable.Vector[Int] =
  Vector(4, 6, 8, 10, 12, 14, 16, 18, 20, 22)
```

{% endtab %}
{% tab 'Scala 3' for=views_2 %}

```scala
scala> val v = Vector((1 to 10)*)
val v: scala.collection.immutable.Vector[Int] =
  Vector(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)

scala> v.map(_ + 1).map(_ * 2)
val res5: scala.collection.immutable.Vector[Int] =
  Vector(4, 6, 8, 10, 12, 14, 16, 18, 20, 22)
```

{% endtab %}
{% endtabs %}
In the last statement, the expression `v.map(_ + 1)` constructs a new vector which is then transformed into a third vector by the second call to `map(_ * 2)`. In many situations, constructing the intermediate result from the first call to `map` is a bit wasteful. In the example above, it would be faster to do a single map with the composition of the two functions `(_ + 1)` and `(_ * 2)`. If you have the two functions available in the same place you can do this by hand. But quite often, successive transformations of a data structure are done in different program modules. Fusing those transformations would then undermine modularity. A more general way to avoid the intermediate results is by turning the vector first into a view, then applying all transformations to the view, and finally forcing the view to a vector:
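For instance, the first two steps might look like this (the `vv` binding and result numbers are illustrative):

```scala
scala> val vv = v.view
val vv: scala.collection.IndexedSeqView[Int] = IndexedSeqView(<not computed>)

scala> vv.map(_ + 1)
val res6: scala.collection.IndexedSeqView[Int] = IndexedSeqView(<not computed>)
```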
The result of the `map` is another `IndexedSeqView[Int]` value. This is in essence a wrapper that *records* the fact that a `map` with function `(_ + 1)` needs to be applied on the vector `v`. It does not apply that map until the view is forced, however. Let's now apply the second `map` to the last result.
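Applying the second `map` and then forcing the combined view back into a vector completes the pipeline (again with illustrative result numbers):

```scala
scala> res6.map(_ * 2)
val res7: scala.collection.IndexedSeqView[Int] = IndexedSeqView(<not computed>)

scala> res7.to(Vector)
val res8: scala.collection.immutable.Vector[Int] = Vector(4, 6, 8, 10, 12, 14, 16, 18, 20, 22)
```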
Both stored functions get applied as part of the execution of the `to` operation and a new vector is constructed. That way, no intermediate data structure is needed.
The main reason for using views is performance. You have seen that by switching a collection to a view the construction of intermediate results can be avoided. These savings can be quite important. As another example, consider the problem of finding the first palindrome in a list of words. A palindrome is a word which reads backwards the same as forwards. Here are the necessary definitions:
```scala
def isPalindrome(x: String) = x == x.reverse
// takes any Iterable, so that a view can be passed as well as a strict sequence
def findPalindrome(s: Iterable[String]) = s.find(isPalindrome)
```
Now, assume you have a very long sequence `words`, and you want to find a palindrome in the first million words of that sequence. Can you re-use the definition of `findPalindrome`? Of course, you could write:
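For instance, assuming `words` is a `Seq[String]`:

```scala
findPalindrome(words.take(1000000))
```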
This nicely separates the two aspects of taking the first million words of a sequence and finding a palindrome in it. But the downside is that it always constructs an intermediary sequence consisting of one million words, even if the first word of that sequence is already a palindrome. So potentially, 999'999 words are copied into the intermediary result without being inspected at all afterwards. Many programmers would give up here and write their own specialized version of finding palindromes in some given prefix of an argument sequence. But with views, you don't have to. Simply write:
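For instance, given the `Iterable[String]` parameter above:

```scala
findPalindrome(words.view.take(1000000))
```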
This has the same nice separation of concerns, but instead of a sequence of a million elements it will only construct a single lightweight view object. This way, you do not need to choose between performance and modularity.
After having seen all these nifty uses of views you might wonder why have strict collections at all? One reason is that performance comparisons do not always favor lazy over strict collections. For smaller collection sizes the added overhead of forming and applying closures in views is often greater than the gain from avoiding the intermediary data structures. A probably more important reason is that evaluation in views can be very confusing if the delayed operations have side effects.
Here's an example which bit a few users of versions of Scala before 2.8. In these versions the `Range` type was lazy, so it behaved in effect like a view. People were trying to create a number of actors like this:
{% tabs views_11 class=tabs-scala-version %}
{% tab 'Scala 2' for=views_11 %}
```scala
val actors = for (i <- 1 to 10) yield actor { ... }
```
{% endtab %}
{% tab 'Scala 3' for=views_11 %}
```scala
val actors = for i <- 1 to 10 yield actor { ... }
```
{% endtab %}
{% endtabs %}
They were surprised that none of the actors was executing afterwards, even though the actor method should create and start an actor from the code that's enclosed in the braces following it. To explain why nothing happened, remember that the for expression above is equivalent to an application of map:
{% tabs views_12 %}
{% tab 'Scala 2 and 3' for=views_12 %}

```scala
val actors = (1 to 10).map(i => actor { ... })
```

{% endtab %}
{% endtabs %}
Since previously the range produced by `(1 to 10)` behaved like a view, the result of the map was again a view. That is, no element was computed, and, consequently, no actor was created! Actors would have been created by forcing the range of the whole expression, but it's far from obvious that this is what was required to make the actors do their work.
To avoid surprises like this, the current Scala collections library has more regular rules. All collections except lazy lists and views are strict. The only way to go from a strict to a lazy collection is via the `view` method. The only way to go back is via `to`. So the `actors` definition above would now behave as expected in that it would create and start 10 actors. To get back the surprising previous behavior, you'd have to add an explicit `view` method call:
{% tabs views_13 class=tabs-scala-version %}
{% tab 'Scala 2' for=views_13 %}

```scala
val actors = for (i <- (1 to 10).view) yield actor { ... }
```

{% endtab %}
{% tab 'Scala 3' for=views_13 %}

```scala
val actors = for i <- (1 to 10).view yield actor { ... }
```

{% endtab %}
{% endtabs %}
In summary, views are a powerful tool to reconcile concerns of efficiency with concerns of modularity. But in order not to be entangled in aspects of delayed evaluation, you should restrict views to purely functional code where collection transformations do not have side effects. What's best avoided is a mixture of views and operations that create new collections while also having side effects.