Swift's high-level semantics try to relieve programmers from thinking about memory management in typical application code. In situations where predictable performance and runtime behavior are needed, though, the variability of ARC and Swift's optimizer has proven difficult for performance-oriented programmers to work with. The Swift Performance team at Apple is working on a series of language changes and features that will make the ARC model easier to understand, while also expanding the breadth of manual control available to the programmer. Many of these features are based on concepts John McCall previously sketched out in the Ownership Manifesto ([Manifesto] Ownership), and indeed, the implementation of these features will also provide a technical foundation for move-only types and the other keystone ideas from that manifesto. We will be posting pitches for the features described in this document over the next few months.
We want these features to fit within the "progressive disclosure" ethos of Swift. You should not need these features when writing everyday Swift code without performance constraints, and when reading Swift code that uses them, you should be able to understand its meaning, aside from the ARC specifics, while largely ignoring them. Conversely, for programmers who are tuning the performance of their code, we want to provide a predictable model that is straightforward to understand.
Lexical lifetimes
Our first step does not really change the surface language at all, but makes the baseline behavior of memory management in Swift more predictable and stable, by anchoring lifetimes of local variables to their lexical scope. Going back to Objective-C ARC, we tried to avoid promising anything about the exact lifetimes of bindings, saying that code should not generally rely on releases and deallocations happening at any specific point in time between the final use of the variable and the variable's end of scope. We wanted to have our cake and eat it too, allowing debug builds to compile straightforwardly, with good debugger behavior, while also retaining the flexibility for optimized builds to reduce memory usage and minimize ARC traffic by shortening lifetimes to the time of use.
However, in practice, this was a difficult model for developers to work with: different behavior in debug vs. release builds can lead to subtle, easy-to-miss bugs sneaking through testing. Many common patterns were also technically invalid by the old ARC language rules. For example, if you use a weak reference to avoid reference cycles between a controller and delegate, like this:
class MyDelegate {}

class MyController {
    weak var delegate: MyDelegate?
    init(_ delegate: MyDelegate) {
        self.delegate = delegate
    }
    func callDelegate() {
        _ = delegate!
    }
}

let delegate = MyDelegate()
MyController(delegate).callDelegate()
then the delegate variable's lifetime would be in jeopardy as soon as it's done being passed to MyController's initializer, since that is the last use of the variable. Because the delegate variable was the only strong reference to the MyDelegate object, that object gets deallocated immediately, causing the weak reference in the controller to be nilled out even before the expression has finished being evaluated. Every time the optimizer has improved and discovered new opportunities to shorten object lifetimes, we've inevitably broken working code that looks similar to this.
The wishy-washy "we can release at any time" rule also made certain constructs dangerous, such as anything that interacts with C's errno. If a variable can be released at any point, releasing it may trigger deinitializers; those deinitializers may call free(3), and free can clobber errno. Even if it is unlikely that the optimizer would choose to shorten the lifetime of an object exactly to the time between the user making a C library call and checking errno for the error state afterward, the existence of the possibility makes the programming model more hazardous than it needs to be.
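As a concrete illustration, here is a hedged sketch of that hazard; the LogFile class, the paths, and whether the optimizer would actually shrink the lifetime this way are all hypothetical:

import Darwin  // for open(2), errno, and free(3); on Linux, Glibc plays this role

final class LogFile {
    let cPath: UnsafeMutablePointer<CChar>
    init(path: String) { cPath = strdup(path) }
    deinit {
        // Freeing C memory in a deinitializer is allowed to clobber errno.
        free(cPath)
    }
}

func openDataFile() -> Int32 {
    let log = LogFile(path: "/tmp/data.log")
    print("opening; logging to \(String(cString: log.cPath))") // last use of `log`
    let fd = open("/no/such/file", O_RDONLY)
    // Under the old rules, `log` could in principle have been released right
    // here, running its deinit between the failed open(2) and the errno check.
    if fd < 0 {
        print("open failed, errno = \(errno)")
    }
    return fd
}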
For these reasons, we think it makes sense to change the language rules to follow most users' intuition, while still giving us the flexibility to optimize in important cases. Rather than saying that releases on variables can happen literally anywhere, we will say that releases are anchored to the end of the variable's scope, and that operations such as accessing a weak reference, using pointers, or calling into external functions act as deinitialization barriers that limit the optimizer's ability to shorten variable lifetimes. The upcoming proposal will go into more detail about what exactly anchoring means, and what constitutes a barrier, but in our experiments, this model provides much more predictable behavior and greatly reduces the need for things like withExtendedLifetime in common usage patterns, without sacrificing much performance in optimized builds. The model remains less strict than C++'s scoping model, since it still allows for reordering of releases that go out of scope at the same time, but we haven't seen order dependencies among deinits as a major problem in practice. The optimizer in this model can still shorten variable lifetimes when there are no deinitialization barriers, and such code is unlikely to observe the effects of deinitialization.
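For example, under the old rules the delegate example above could only be made reliable with an explicit lifetime extension; under lexical lifetimes, the plain form suffices. A hedged before-and-after sketch, reusing the MyDelegate and MyController types from above:

// Old rules: to be safe, `d`'s lifetime had to be extended explicitly
// through the call.
let d = MyDelegate()
withExtendedLifetime(d) {
    MyController(d).callDelegate()
}

// Lexical lifetimes: `d2` is anchored to the end of its scope, and the
// weak-reference load inside callDelegate() is a deinitialization barrier,
// so the plain form behaves as written.
let d2 = MyDelegate()
MyController(d2).callDelegate()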
move function for explicit ownership transfer
Pitch: [Pitch] Move Function + "Use After Move" Diagnostic
Having set a predictable baseline for memory management behavior, we can provide tools to give users additional control where they need it. Shortening variable lifetimes is still an important optimization for reducing ARC traffic, and for maintaining uniqueness of copy-on-write data structures when building up values. For instance, we may want to build up an array, use that array as part of a larger struct, and then be able to update the struct efficiently by maintaining the array's uniqueness. If we write this:
struct SortedArray {
    var values: [String]

    init(values: [String]) {
        self.values = values
        // Ensure the values are actually sorted
        self.values.sort()
    }
}
then under the lexical lifetimes rule, the call to self.values.sort() can potentially trigger a copy-on-write, since the underlying array is still referenced by the values argument passed into the initializer. The optimizer could copy-forward values into the newly-initialized struct, since it is the last use of values in its scope, but this isn't guaranteed. We want to explicitly guarantee that copy forwarding happens here, since it affects the time complexity of making updates to the aggregate. For this purpose, we will add the move function, which explicitly transfers ownership of a value in a local variable at its last use:
struct SortedArray {
    var values: [String]

    init(values: [String]) {
        // Ensure that, if `values` is uniquely referenced, it remains so,
        // by moving it into `self`
        self.values = move(values)
        // Ensure the values are actually sorted
        self.values.sort()
    }
}
By making the transfer of ownership explicit with move, we can guarantee that the lifetime of the values argument is ended at the point we expect. If its lifetime can't be ended at that point, because there are more uses of the variable later on in its scope, or because it's not a local variable, then the compiler can raise errors explaining why. Since values is no longer active, self.values is the only reference remaining in this scope, and the sort method won't trigger an unnecessary copy-on-write.
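For instance, here is a hedged sketch of the kind of use-after-move diagnostic the pitch describes; the exact wording of the error is illustrative, not the compiler's:

func makeSorted() -> SortedArray {
    let values = ["banana", "cherry", "apple"]
    let sorted = SortedArray(values: move(values)) // lifetime of `values` ends here
    print(values.count) // error: use of 'values' after it was moved
    return sorted
}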
Managing ownership transfer across calls with argument modifiers
Another currently-underspecified part of Swift's ownership model is how the language transfers ownership of values across calls. When passing an argument to a function, the caller can either let the function borrow the argument value, in which case the callee assumes temporary ownership of the existing value for the duration of the call and the caller takes ownership back when the callee returns, or it can let the callee consume the argument, relinquishing the caller's own ownership and giving the callee the responsibility to either release the value when it's done with it or transfer ownership somewhere else. inout arguments must be borrowed, because they are mutated in place, but Swift today does not otherwise specify which convention it uses for regular arguments. In practice, it follows some heuristic rules:
- Most regular function arguments are borrowed.
- Arguments to init are consumed, as is the newValue passed to a set operation.
The motivation for these rules is that initializers and setters are more likely to use their arguments to construct a value, or modify an existing value, so we want to allow initializers and setters to move their arguments into the result value without additional copies, retains, or releases. These rules are a good starting point, but we may want to override the default argument conventions to minimize ARC and copies. For instance, the append method on Array would also benefit from consuming its argument so that the new values can be forwarded into the data structure, and so would any other similar method that inserts a value into an existing data structure. We can add a new argument modifier to put the consuming convention in developer control:
extension Array {
    mutating func append(_ value: consuming Element) { ... }
}
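In combination with the move function, callers could then forward local values into a consuming append without any extra retains, releases, or copies. A minimal, hedged sketch, where makeGreetings is a hypothetical helper:

func makeGreetings() -> [String] {
    var names: [String] = []
    let greeting = "Hello, " + "world"
    // `append` consumes its argument, and `move` ends the lifetime of
    // `greeting` here, so the string is forwarded in place.
    names.append(move(greeting))
    return names
}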
On the other hand, an initializer may take arguments that serve only as options or other incidental input to the initialization process, without actually being used as part of the newly-initialized value, so the default consuming convention for initializers imposes an unnecessary copy in the call sequence, since the caller must perform an extra retain or copy to balance the consumption in the callee. So we can also put the nonconsuming convention in developer control:
struct Foo {
    var bars: [Bar]

    // `name` is only used for logging, so making it `nonconsuming`
    // saves a retain on the caller side
    init(bars: [Bar], name: nonconsuming String) {
        print("creating Foo with name \(name)")
        self.bars = move(bars)
    }
}
(These modifiers already exist in the compiler, spelled __owned and __shared, though we think those names are somewhat misleading in their current form.)
read and modify accessor coroutines for in-place borrowing and mutation of data structures
Swift provides computed properties and subscripts to allow types to abstract over their physical representation, defining properties and data structures in terms of user-defined "get" and "set" functions. However, there is a fundamental cost to the get/set abstraction; under the sugar, getters and setters are plain old functions. The getter has to provide the accessed value as a return value, and returning a value requires copying it. The setter then has to take the new value as an argument, and as previously discussed, that argument is callee-consumed by default. If we're performing what looks like an in-place modification on a computed property, that involves calling the getter, applying the modification to the result of the getter, and then calling the setter. Even if we define a computed property that attempts to transparently reveal an underlying private stored property:
struct Foo {
    private var _x: [Int]

    var x: [Int] {
        get { return _x }
        set { _x = newValue }
    }
}
we're adding overhead by accessing through that computed property, since:
foo.x.append(1738)
evaluates to:
var foo_x = foo.get_x()
foo_x.append(1738)
foo.set_x(foo_x)
For copy-on-write types like Array, this is particularly undesirable, since the temporary copy returned by the getter forces the array contents to always be copied when the value is modified.
We would really like computed properties and subscripts to be able to yield access to part of the value, allowing the code accessing the property to work on that value in-place. Our internal solution for this in the standard library is to use single-yield coroutines as alternatives to get/set functions, called read and modify:
struct Foo {
    private var _x: [Int]

    var x: [Int] {
        read { yield _x }
        modify { yield &_x }
    }
}
A normal function stops executing once it's returned, so normal function return values must have independent ownership from their arguments; a coroutine, on the other hand, keeps executing, and keeps its arguments alive, after yielding its result, until the coroutine is resumed to completion. This allows coroutines to provide access to their yielded values in-place without additional copies, so types can use them to implement custom logic for properties and subscripts without giving up the in-place mutation abilities of stored properties. These accessors are already implemented in the compiler under the internal names _read and _modify, and the standard library has experimented extensively with these features and found them very useful, allowing the standard collection types like Array, Dictionary, and Set to implement subscript operations that allow for efficient in-place mutation of their underlying data structures, without triggering unnecessary copy-on-write overhead when data structures are nested within one another.
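For example, here is a hedged sketch using today's underscored spellings; _read and _modify are unofficial compiler internals, so their names and behavior may change:

struct Wrapper {
    private var _storage: [Int] = []

    var storage: [Int] {
        _read { yield _storage }
        _modify { yield &_storage }
    }
}

var w = Wrapper()
w.storage.append(1738) // mutates _storage in place through the modify coroutine,
                       // avoiding the temporary copy a get/set pair would make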
Requiring explicit copies on variables
The features described so far greatly increase a Swift programmer's ability to control the flow of ownership at the end of value lifetimes, across functions, and through property and subscript accesses. In the middle, though, Swift is still normally free to introduce copies on values as necessary in the execution of a function. The compiler should be able to help developers optimize code, and keep their code optimized in the face of future change, by allowing implicit copying behavior to be selectively disabled, and offering an explicit copy operation to mark copies where they're needed:
class C {}
func borrowTwice(first: C, second: C) {}
func consumeTwice(first: consuming C, second: consuming C) {}
func borrowAndModify(first: C, second: inout C) {}

func foo(x: @noImplicitCopy C) {
    // This is fine. We can borrow the same value to use it as a
    // nonconsuming argument multiple times.
    borrowTwice(first: x, second: x)

    // This would normally require copying x twice, because
    // `consumeTwice` wants to consume both of its arguments, and
    // we want x to remain alive for use here too.
    // @noImplicitCopy would flag both of these call sites as needing
    // `copy`.
    consumeTwice(first: x, second: x) // error: copies x, which is marked noImplicitCopy
    consumeTwice(first: copy(x), second: copy(x)) // OK

    // This would also normally require copying x once, because
    // modifying x in-place requires exclusive access to x, so
    // the `first` immutable argument would receive a copy instead
    // of a borrow to avoid breaking exclusivity.
    borrowAndModify(first: copy(x), second: &x)

    // Here, we can `move` the second argument, since it is the final
    // use of `x`
    consumeTwice(first: copy(x), second: move(x))
}
For a programmer looking to minimize the excess copies and ARC traffic in their code, making copies explicit like this is essential feedback to help them adjust their code, changing argument conventions, adopting accessor coroutines, and making other copy-avoiding changes.
Generalized nonescaping arguments
We can selectively prevent implicit copies on a borrowed function argument, as laid out above, but we can selectively prevent explicit copies as well. If we do that, then the argument is effectively non-escaping, meaning the callee cannot copy and store the value anywhere it can be kept alive beyond the duration of the call. We already have this concept for closures: closure arguments are nonescaping by default, and must be marked @escaping to be used beyond the duration of their call. Making closures nonescaping has both performance and correctness benefits; a nonescaping closure can be allocated on the stack and never needs to be retained or released, instead of being allocated on the heap and reference-counted like an escaping closure. Nonescaping closures can also safely capture and modify inout arguments from their enclosing scope, because they are guaranteed to be executed only for the duration of their call, if at all.
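For example, this already works for closures today; a minimal sketch, with illustrative helper names:

func forEachValue(_ values: [Int], _ body: (Int) -> Void) {
    // `body` is nonescaping by default: it can be called here, but not stored.
    for value in values { body(value) }
}

func sum(_ values: [Int], into total: inout Int) {
    // A nonescaping closure may capture and mutate the `total` inout argument,
    // because it is guaranteed not to outlive this call.
    forEachValue(values) { total += $0 }
}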
Non-closure types could benefit from these performance and safety properties as well. Swift's optimizer can already stack-promote classes, arrays, and dictionaries in limited circumstances, but its power is limited by the fact that function calls must generally be assumed to escape their arguments. Being able to mark arbitrary arguments as @nonescaping could make the optimizer more powerful:
func foo(x: Int, y: Int, z: Int) {
    // We would like to stack-allocate this array:
    let xyz = [x, y, z]
    // but this call makes it look like it might escape.
    print(xyz)
}

// However, if we mark print's argument as nonescaping, then we can still
// stack-allocate xyz.
func print(_ args: @nonescaping Any...) { }
Furthermore, there are many APIs, particularly low-level ones such as withUnsafePointer, that invoke body closures with arguments that must not be escaped, and the language currently relies on the programmer to use them correctly. Being able to make the arguments to these functions nonescaping would allow the compiler to enforce that they are used safely. For instance, if withUnsafePointer declares its body closure as taking a nonescaping argument, the compiler can then enforce that the pointer is not misused:
func withUnsafePointer<T, R>(to: T, _ body: (@nonescaping UnsafePointer<T>) -> R) -> R

let x = 42
var xp: UnsafePointer<Int>? = nil
withUnsafePointer(to: x) { p in
    xp = p // error! can't escape p
}
Borrow variables
When working with deep object graphs, it’s natural to want to assign a local variable to a property deeply nested within the graph:
let greatAunt = mother.father.sister
greatAunt.sayHello()
greatAunt.sayGoodbye()
However, with shared mutable objects, such as global variables and class instances, these local variable bindings necessitate a copy of the value out of the object; either mother or mother.father above could be mutated by code anywhere else in the program referencing the same objects, so the value of mother.father.sister must be copied to the local variable greatAunt to preserve it independently of changes to the object graph from outside the local function. Even with value types, other mutations in the same scope would force the variable to copy to preserve the value at the time of binding. We may want to prevent such mutations while the binding is active, in order to be able to share the value in-place inside the object graph for the lifetime of the variable referencing it. We can do this by introducing a new kind of local variable binding that binds to the value in place without copying, while asserting a borrow over the objects necessary to access the value in place:
// `ref` comes from C#, as a strawman starting point for syntax.
// (It admittedly isn't a perfect name, since unlike C#'s ref, this would
// actively prevent mutation of stuff borrowed to form the reference, and
// if the right hand side involves computed properties and such, it may not
// technically be a reference)
ref greatAunt = mother.father.sister
greatAunt.sayHello()
mother.father.sister = otherGreatAunt // error, can't mutate `mother.father.sister` while `greatAunt` borrows it
greatAunt.sayGoodbye()
We have a similar problem with unavoidable copies when passing properties of a class instance as arguments to a function. Because the callee might modify the shared state of the object graph, we normally must copy the argument value. Using a borrow variable, to explicitly borrow the value in place, gives us a way to eliminate this copy:
print(mother.father.sister) // copies mother.father.sister
ref greatAunt = mother.father.sister
print(greatAunt) // doesn't copy, since it's already borrowed
It’s also highly desirable to be able to perform multiple mutations on part of an object graph in a single access. If we write something like:
mother.father.sister.name = "Grace"
mother.father.sister.age = 115
then not only is that repetitive, but it's also inefficient, since the get/set, or read/modify, sequence to access mother, then mother.father, then mother.father.sister must be performed separately for each statement, in case there were any intervening mutations of the shared state between operations. As above, we really want to make a local variable that asserts exclusive access to the value being modified for the scope of the variable, allowing us to mutate it in-place without repeating the access sequence to get to it:
inout greatAunt = &mother.father.sister
greatAunt.name = "Grace"
mother.father.sister = otherGreatAunt // error, can't access `mother.father.sister` while exclusively borrowed by `greatAunt`
greatAunt.age = 115
There are other places where inout bindings for in-place mutation are desirable, but aren't currently available, and we can extend inout bindings to be available in those places as well. For instance, when switch-ing over an enum, we would like to be able to bind to its payload and update it in place:
enum ZeroOneOrMany<T> {
    case zero
    case one(T)
    case many([T])

    mutating func append(_ value: consuming T) {
        switch &self {
        case .zero:
            self = .one(move(value))
        case .one(let oldValue):
            self = .many([move(oldValue), move(value)])
        case .many(inout oldValues):
            oldValues.append(move(value))
        }
    }
}
Looking forward to move-only types
No-implicit-copy and nonescaping variables are effectively "move-only variables", since they ask the compiler to force a variable to be used only in ways that don't require ARC to insert copies. The consuming and nonconsuming modifiers on arguments, the read and modify accessor coroutines, and ref and inout variables allow local variables to make non-mutating and mutating references into data structures and object graphs without copying. All together, these features clear the way to support move-only types: types for which every value is non-copyable. As discussed in the Ownership Manifesto, move-only types make it possible to represent uniquely-owned resources that cannot safely have multiple copies of themselves, such as low-level concurrency primitives like atomic variables and locks, without the overhead of ARC. Fully designing and implementing move-only types involves broad changes to the generics model, and retrofits to the standard library to support them, so we won't include them in this roadmap. However, many of these features, and the implementation work behind them, set the stage for implementing them in the future.
Building safer performance-oriented APIs with these features
Even without the full expressivity of move-only types, there are new APIs we can add that allow for working with memory safely with lower overhead than our existing safe APIs. Types that have no public initializers, but which are only made available to client code via @nonescaping arguments, have a useful subset of the functionality of a move-only type: they can't be copied, so they can be used to represent scoped references to resources. Full move-only types would also allow for ownership transfer between scopes and different values, and for generic abstraction over move-only types. But even without those abilities, we can create useful APIs. For instance, we can create a safe type for referring to contiguous memory regions, as efficient and flexible as UnsafeBufferPointer in its ability to refer to any contiguous memory region, while remaining as safe as ArraySlice. We could call this type BufferView, and give it collection-like APIs to index elements, or slice out subviews:
struct BufferView<Element> {
    // no public initializers

    subscript(i: Int) -> Element { read modify }

    subscript<Range: RangeExpression>(range: Range) -> BufferView<Element> {
        @nonescaping read
        @nonescaping modify
    }

    var count: Int { get }
}
Contiguous collections like Array can provide new subscript operators, allowing access to part of their in-place contents via a BufferView:
extension Array {
    subscript<Range: RangeExpression>(bufferView range: Range) -> BufferView<Element> {
        @nonescaping read
        @nonescaping modify
    }
}
Note that these APIs use the @nonescaping modifier on read and modify coroutines, indicating that when client code accesses the BufferView, it cannot copy or otherwise prolong the lifetime of the view outside of the duration of the accessor coroutine.
var lastSummedBuffer: BufferView<Int>?

func sum(buffer: @nonescaping BufferView<Int>) -> Int {
    var value = 0
    // error! can't escape `buffer` out of its scope
    lastSummedBuffer = buffer
    // Move-only types would let us make BufferView conform to Sequence.
    // Until then, we can loop over its indices…
    for i in 0..<buffer.count {
        value += buffer[i]
    }
    return value
}
let primes = [2, 3, 5, 7, 11, 13, 17]
// We can pass the BufferView of the array, without any reference counting,
// and without the array looking like it escapes, making it more likely the
// constant array above gets stack-allocated, or optimized into a global
// static array
let total = sum(buffer: primes[bufferView: ...])
The nonescaping constraint allows BufferView to be safe, while having overhead on par with UnsafeBufferPointer, and @nonescaping coroutines that produce BufferViews provide a more expressive alternative to the withUnsafe { } closure-based pattern used in Swift's standard library today. Multiple BufferViews from multiple data structures can be worked with in the same scope without a "pyramid of doom" of closure literals. When these features become official parts of the language, user code can adopt this pattern as well, replacing something like:
extension ResourceHolder {
    func withScopedResource<R>(_ body: (ScopedResource) throws -> R) rethrows -> R {
        let scopedResource = setUpScopedResource()
        defer { tearDownScopedResource(scopedResource) }
        return try body(scopedResource)
    }
}
with:
extension ResourceHolder {
    var scopedResource: ScopedResource {
        @nonescaping read {
            let scopedResource = setUpScopedResource()
            defer { tearDownScopedResource(scopedResource) }
            yield scopedResource
        }
    }
}
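Call sites then shrink from a nested closure to a plain property access; a hedged sketch, where holder and use are placeholders:

// Before: closure-based scoping
holder.withScopedResource { resource in
    use(resource)
}

// After: the @nonescaping read accessor scopes the resource to the access itself,
// setting it up before the yield and tearing it down afterward.
use(holder.scopedResource)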
Taken together, these features will greatly improve Swift programmers' ability to write low-level code safely and efficiently, and to control the ARC behavior of higher-level code, while providing the technical basis for full move-only types in the future.