
v2 API Proposal Document #339

Merged · 12 commits · Oct 9, 2024

Conversation

@aryan-25 (Contributor) commented Aug 8, 2024

Hello 👋

I am Aryan, a Computer Science student and currently an intern at Apple on the Swift on Server team. As part of my internship, I have been working with @fabianfett to propose a new v2 API for the swift-aws-lambda-runtime library.

swift-aws-lambda-runtime is an important library for the Swift on Server ecosystem. The initial API was written before async/await was introduced to Swift. When async/await was introduced, shims were added to bridge between the underlying SwiftNIO EventLoop interfaces and async/await.

However, just like gRPC-swift and postgres-nio, we now want to shift to solely using async/await constructs instead of EventLoop interfaces. For this, large parts of the current API have to be reconsidered. This also provides a good opportunity to add support for new AWS Lambda features such as response streaming.

We have written a document that explains the current limitations of the library and proposes a detailed design for the v2 API.

Please read the proposal and voice your opinions either here or on the forum post. We are looking forward to your feedback!

cc: @sebsto @tachyonics @FranzBusch @Lukasa @tomerd @weissi @adam-fowler @ktoso @0xTim

@sebsto (Contributor) commented Aug 8, 2024

Thank you @aryan-25! I'll have a look this morning.

@sebsto (Contributor) commented Aug 8, 2024

This is not related to this doc, but should we move the Documentation.docc directory to just underneath Sources?
@fabianfett wdyt?

Current structure:

Sources/AWSLambdaRuntimeCore
└── Documentation.docc
    ├── Resources
    │   ├── code
    │   └── tutorials
    └── tutorials

Proposed structure:

Sources/Documentation.docc
├── Resources
│   ├── code
│   └── tutorials
└── tutorials

@0xTim (Member) left a comment:

Great steps forward!


`LambdaContext` will be largely unchanged, but the `eventLoop` property as well as the `allocator` property (of type `ByteBufferAllocator`) will be removed.

A new function `backgroundTask()` will also be added. This will allow tasks to be run in the background while and after the response is/has been sent. Please note that `LambdaContext` will not be Sendable anymore.
Member:

Suggested change
A new function `backgroundTask()` will also be added. This will allow tasks to be run in the background while and after the response is/has been sent. Please note that `LambdaContext` will not be Sendable anymore.
A new function `addBackgroundTask(_:)` will also be added. This will allow tasks to be run in the background while and after the response is/has been sent. Please note that `LambdaContext` will not be Sendable anymore.

Member:

What's the reasoning behind making the context non-sendable? Is it for the background task stuff? It should be fine given the scope of sharing it will be pretty narrow but it could cause issues with strict concurrency for those that aren't aware

Member:

In order to allow adding background tasks in a structured way, we will need to back the LambdaContext with a TaskGroup. Since TaskGroup is not Sendable we can't mark LambdaContext as Sendable. However if users need any property (all of them are Sendable) out of the LambdaContext they can just get that property out of it and then pass it around or close over it.

Contributor:

Why does the context need to hold the TaskGroup? I smell structured concurrency violations :)

Contributor (author):

LambdaContext is now marked as Sendable, as before. We have removed the addBackgroundTask(_:) function in the revised proposal.


### StreamingLambdaHandler

The new `StreamingLambdaHandler` protocol is the base protocol to implement a Lambda function. Most users will not use this protocol and instead use the `LambdaHandler` protocol defined in the `Codable` Support section.
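For context, the writer-based shape discussed throughout this thread looks roughly like this. This is a sketch assembled from the names used in the discussion (`StreamingLambdaHandler`, `LambdaResponseStreamWriter`, `writeAndFinish`); the exact signatures are defined in the proposal document:

```swift
public protocol StreamingLambdaHandler {
    /// Handle one raw invocation; the response is streamed via `responseWriter`.
    mutating func handle(
        _ event: ByteBuffer,
        responseWriter: some LambdaResponseStreamWriter,
        context: LambdaContext
    ) async throws
}

public protocol LambdaResponseStreamWriter {
    /// Stream a chunk of the response body.
    mutating func write(_ buffer: ByteBuffer) async throws
    /// Mark the response as complete.
    func finish() async throws
    /// Convenience for the common single-write case.
    func writeAndFinish(_ buffer: ByteBuffer) async throws
}
```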
Member:

Just for clarity it might be nice to have a section talking about the LambdaHandler - I spent a bit of time trying to find the detailed explanation of it before realising it's in the top section


### LambdaRuntime

`LambdaRuntime` is the class that communicates with the Lambda control plane as defined in [Building a custom runtime for AWS Lambda](https://docs.aws.amazon.com/lambda/latest/dg/runtimes-custom.html) and forwards the invocations to the provided `StreamingLambdaHandler`. It will conform to `ServiceLifecycle.Service` to provide support for `swift-service-lifecycle`.
Member:

Does this work with LambdaHandler as well, or does that conform to StreamingLambdaHandler to ferry calls through?

Member:

In order to use a LambdaHandler users will wrap it in a LambdaCodableAdapter. LambdaCodableAdapter conforms to StreamingLambdaHandler. So the LambdaRuntime type that you will use when using LambdaHandler will be:

```swift
let runtime: LambdaRuntime<LambdaCodableAdapter<MyLambdaHandler, Event, Output, JSONDecoder, JSONEncoder>>
```

but users basically never have to use that type explicitly. So to answer your question: yes, it works with LambdaHandler as well, but via the LambdaCodableAdapter.
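As a concrete illustration of that wiring, a sketch of what spelling it out by hand might look like (the adapter initializer shown here is hypothetical, not necessarily the proposed signature; most users would rely on a convenience initializer instead):

```swift
// A user's Codable handler is wrapped in LambdaCodableAdapter, which
// conforms to StreamingLambdaHandler, so LambdaRuntime can drive it.
let adapter = LambdaCodableAdapter(
    decoder: JSONDecoder(),
    encoder: JSONEncoder(),
    handler: MyLambdaHandler()
)
let runtime = LambdaRuntime(handler: adapter)
try await runtime.run()
```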


We decided to implement the approach in which a `LambdaResponseWriter` is passed to the function, since the approach in which a `LambdaResponse` is returned can trivially be built on top of it. This is not true vice versa.

We welcome the discussion on this topic and are open to change our minds and API here.
Member:

It would be interesting to see if anyone has any use cases that integrate something like Swift Middleware

Contributor:

I think this is a fair point and I would love to see how middleware would work with this writer approach. IMO middleware do make some sense for Lambda

Member:

If using Middleware with a request and context, do you think the response writer would be placed in the context or be a separate parameter, thus meaning a departure from the current swift-middleware setup

@adam-fowler (Member) commented Aug 8, 2024:

Regarding whether you provide a response writer versus returning a response that includes a writer, I'm not sure. I have gone back and forth on this when building HB. In most cases I have not seen any need for a response writer, but there are still situations where it makes things clearer, e.g. tracing (you want to finish your request span once the response has been written, not when you have returned a response that includes the closure that'll write the response).

In the end you can implement returning a response on top of an API that uses a response writer, but not the reverse, so for flexibility it should probably be a response writer.

Contributor:

I made this point above, middleware definitely does make sense for lambda (for example tracing should just be a middleware that can be used anywhere) and should just be integrated IMO. My only quasi-concern with the passed-in-writer-approach is that for it to work with middleware, the middleware protocol has to suppress Copyable conformance on one of its associated types - its writer type, something that isn't going to be supported in Swift 6.0.

That said I do think the argument that this is a lower-level and more flexible API makes sense.

Member:

I don't think we want to integrate the swift-middleware library directly. I think if users have those needs they should consider using hummingbird/smoke/vapor in lambda mode, which then brings the option to use middleware.

This is not to say that writing middleware should be impossible. We should allow users to write middleware that they can stack.

Also, I think swift-middleware is currently not targeting a release anytime soon, so in order to make progress we should not depend on it. Integrating swift-middleware (as is) on top of this proposed API is absolutely possible and therefore should not be considered a blocker.

Contributor:

We can debate whether we think the swift-middleware library should be integrated directly but I do agree it shouldn't be considered a blocker for this work/release.

@fabianfett mentioned this pull request Aug 8, 2024
```swift
/// the runtime can ask for further events. Note that AWS will continue to
/// charge you after the response has been returned, but work is still
/// processing in the background.
public func addBackgroundTask(_ body: sending @escaping () async -> ())
```
Contributor:

This doesn't feel necessary to me with the current API proposal, since we pass in an independent writer. A user can just write some bytes, call finish on the writer, and then do any background work before returning from the handle method. They can even set that up in child tasks in the handle method to start the background work while handling the actual request.
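A sketch of that pattern, assuming the writer-based handler shape discussed in the proposal (`flushMetrics()` is a hypothetical helper standing in for real background work):

```swift
struct BackgroundWorkHandler: StreamingLambdaHandler {
    mutating func handle(
        _ event: ByteBuffer,
        responseWriter: some LambdaResponseStreamWriter,
        context: LambdaContext
    ) async throws {
        // Send the full response first…
        try await responseWriter.writeAndFinish(ByteBuffer(string: "done"))
        // …then keep working before returning from handle(). The runtime
        // won't ask for the next invocation until this method returns, and
        // AWS keeps billing for this time.
        try await flushMetrics()
    }
}
```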

Member:

That's a great call. Yes if we keep the API that passes in a LambdaResponseWriter we can put the user in charge!

Contributor:

Agree. This is unstructured concurrency and given that the user controls finish, they know when the response is done.

Member:

@weissi this would not be unstructured, as the underlying runtime would inject a taskGroup into the context and that taskGroup would be used to schedule the child task here. The underlying runtime would then ensure that all tasks have completed before it asks for more work. This approach would be necessary if we used an approach in which the user returns a LambdaResponse:

```swift
protocol LambdaStreamingResponse {
    func handle(_ event: ByteBuffer, context: LambdaContext) async throws -> LambdaResponse
}
```

However this is needed for the LambdaHandler API:

```swift
public protocol CodableLambdaHandler: ~Copyable {
    associatedtype Event
    associatedtype Output

    // only way to schedule background work that can continue after returning the Output here
    // is by having the addBackgroundTask on `LambdaContext` here.
    func handle(_ event: Event, context: LambdaContext) async throws -> Output
}
```

@FranzBusch given this, do we want to have two different LambdaContexts? Or are we fine with keeping LambdaContext.addBackgroundTask even in the situation where it isn't really needed? I think I would opt for keeping it (so that we have it for the 99% use-case).

@weissi (Contributor) commented Aug 9, 2024:

@fabianfett I'm pretty sure that it is unstructured; it just manages to not use a `Task {` to achieve it. It passes the background work into a reference which will then be owned by something further up the stack.

The test is easy: if you write this code

```swift
do { // creating a useless piece of structure (so we can observe Structured Concurrency)
    thing.addBackgroundWork { print("AAAA") }
} // the structure ends here
print("BBBB")
```

is there any way in which AAAA could print after BBBB? If so: unstructured.

Contributor:

> The problem with splitting it across two methods is if the post work needs context from the initial work.

But we can pass the context, no?

Member:

@weissi what if you want to create a task that starts immediately but is allowed to run longer? I totally get the benefits, but I think we need something that is better than `Task {}` here. And I think `addBackgroundTask {}` is significantly better here, as it will guarantee that you get runtime until your background task finishes. We could not guarantee this for `Task {}`. Also, two methods on the interface won't work with the closure API.

Contributor:

@fabianfett Not sure I understand what "a task that starts immediately but is allowed to run longer" is. Could you provide some example code?

Member:

An API call that

  1. does a short computation which is returned to you immediately
  2. spawns a longer computation, whose result is sent to you via email once it finishes (can take up to 15min)

Both should start as soon as the API is invoked. We can potentially question the API design here. But this is possible with Lambda and we should allow users to do this, if they opt to.

Contributor:

Probably the most consistent approach for @fabianfett's use case is to provide a wrapping CodableResponseWriter (potentially with just a writeAndFinish API) while allowing work to continue past the result being returned to the control plane. It is not as simple as returning the result directly. We could provide both forms but that does add its own complexity to the library overall.

```swift
public var invokedFunctionARN: String { get }

/// The timestamp that the function times out.
public var deadline: DispatchWallTime { get }
```
Contributor:

Can we use ContinuousClock.Instant here?

Member:

Nope this is epoch time.

Contributor:

As per the Lambda runtime API:

> Lambda-Runtime-Deadline-Ms – The date that the function times out in Unix time milliseconds. For example, 1542409706888.

Contributor:

Got you, so we would need a `UTCClock.Instant` here.

Member:

Yes. Sadly we don't have that yet. Only way out would be if Lambda built its own Clock. But I don't think we want to do that.

Contributor:

Maybe you should create a `LambdaUTCTimeClock` type that conforms to all the new hotness but implements the clock, with the hope that one day you can deprecate it and do `typealias LambdaUTCTimeClock = UTCClock` with a bit of luck.

Member:

@weissi would you build the sleeping on top of dispatch?

Contributor:

> @weissi would you build the sleeping on top of dispatch?

No. I'd just create a custom Clock type (because the language lacks UTCClock for some reason), which you could for example back with `clock_gettime` and `struct timespec`, hidden as an implementation detail.

Member:

How would you implement the required sleeping methods though?
https://developer.apple.com/documentation/swift/clock/sleep(for:tolerance:)

Contributor:

You can just use the normal Task.sleep on the non-suspending clock for that. UTC time and the non-suspending clock run at the same speed
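A minimal sketch of such a custom clock (all names here are hypothetical; `Foundation.Date` stands in for a `clock_gettime` call for brevity). `sleep` converts the wall-clock deadline into a duration and delegates to `ContinuousClock`, which works precisely because UTC time and the non-suspending clock advance at the same rate:

```swift
import Foundation

// Hypothetical wall-clock Clock with epoch-based instants.
struct LambdaWallClock: Clock {
    struct Instant: InstantProtocol {
        var secondsSinceEpoch: Double

        func advanced(by duration: Duration) -> Instant {
            let (seconds, attoseconds) = duration.components
            return Instant(secondsSinceEpoch:
                secondsSinceEpoch + Double(seconds) + Double(attoseconds) / 1e18)
        }

        func duration(to other: Instant) -> Duration {
            .seconds(other.secondsSinceEpoch - secondsSinceEpoch)
        }

        static func < (lhs: Instant, rhs: Instant) -> Bool {
            lhs.secondsSinceEpoch < rhs.secondsSinceEpoch
        }
    }

    var now: Instant {
        Instant(secondsSinceEpoch: Date().timeIntervalSince1970)
    }

    var minimumResolution: Duration { .nanoseconds(1) }

    func sleep(until deadline: Instant, tolerance: Duration?) async throws {
        // Translate the wall-clock deadline into a continuous-clock deadline.
        let delta = now.duration(to: deadline)
        let continuous = ContinuousClock()
        try await continuous.sleep(until: continuous.now.advanced(by: delta),
                                   tolerance: tolerance)
    }
}
```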

```swift
enum Lambda {
    /// This returns the default EventLoop that a LambdaRuntime is scheduled on.
    /// It uses `NIOSingletons.posixEventLoopGroup.next()` under the hood.
    public static var defaultEventLoop: any EventLoop { get }
```
Contributor:

As noted, I don't see why we would need to create a separate static var for this.

Member:

answered above.



Comment on lines +465 to +467:

```swift
handler: Handler,
encoder: Encoder,
decoder: Decoder
```
Contributor:

Would it make more sense for handler to be the final argument so it could be a trailing closure if the user wants?

Is there an end-to-end example somewhere of how this looks from main() for the case where custom coders need to be used?



Comment on lines +545 to +549:

```swift
let runtime = LambdaRuntime { (event: Input, context: LambdaContext) in
    Greeting(echoedMessage: event.message)
}

try await runtime.run()
```
Contributor:

For this, the simplest use case, would it make sense to provide a static main() on LambdaRuntime that just calls its run method so users can use @main if this is all they need to do?

Member:

Not sure I can follow here. What would that code look like?

Contributor:

Wouldn't it look like:

```swift
@main
LambdaRuntime { (event: Input, context: LambdaContext) in
    Greeting(echoedMessage: event.message)
}
```


The current API extensively uses the `EventLoop` family of interfaces from SwiftNIO in many areas. To use these
interfaces correctly though, it requires developers to exercise great care and be aware of certain details such as never
running blocking code on the same `EventLoop` the library uses. Developers also need to understand the various transform
Contributor:

same applies to async/await. I'd scratch this


```swift
/// - Parameter logger: A logger
public init(
    handler: consuming sending Handler,
    eventLoop: EventLoop = Lambda.defaultEventLoop,
```
Contributor:

Will we still have the ability to run everything in just one thread (the main thread)? I think for many lambdas you'll never want to fork off a second thread at all. NIO thread & async/await should just run in the main thread (but as global default executor not as @MainActor)

@Joannis self-requested a review August 13, 2024 12:04

@Joannis (Member) commented Aug 13, 2024

I don't have further input other than what was already mentioned. Looks like a solid update to me, with some small considerations here and there.

- Remove the `reportError(_:)` method from `LambdaResponseStreamWriter` and instead make the `handle(...)` method of `StreamingLambdaHandler` throwing.
- Remove the `addBackgroundTask(_:)` method from `LambdaContext` due to structured concurrency concerns and introduce the `LambdaWithBackgroundProcessingHandler` protocol as a solution.
  - Introduce `LambdaHandlerAdapter`, which adapts handlers conforming to `LambdaHandler` with `LambdaWithBackgroundProcessingHandler`.
  - Update `LambdaCodableAdapter` to now be generic over any handler conforming to `LambdaWithBackgroundProcessingHandler` instead of `LambdaHandler`.
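Based on the names above and the background-processing example later in the thread (`outputWriter.write(result:)`), the new protocol presumably has roughly this shape. This is a sketch: treating `LambdaResponseWriter` as having a primary associated type is an assumption, and the exact signatures live in the proposal document:

```swift
public protocol LambdaWithBackgroundProcessingHandler {
    associatedtype Event
    associatedtype Output

    /// Writes the result through `outputWriter`, then may continue doing
    /// background work before returning; the runtime waits for the return
    /// before requesting the next invocation.
    func handle(
        _ event: Event,
        outputWriter: some LambdaResponseWriter<Output>,
        context: LambdaContext
    ) async throws
}
```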
@aryan-25 (Contributor, author):

cc @Joannis @sebsto @czechboy0 @weissi @FranzBusch @adam-fowler @0xTim @tachyonics

We have updated the proposal to address the concerns raised. We are looking forward to hearing your feedback.

@fabianfett (Member):

@aryan-25 Thanks so much for continuing to push this forward! 🙏🚀

@tachyonics previously approved these changes Aug 21, 2024

@tachyonics (Contributor) left a comment:

Thanks @aryan-25, this is looking great!

I am fine with having the three handler protocols; we will just need to make sure that each is documented with the use case it is recommended for, and, for the writer protocols, with what it means for the control plane to finish the response but delay returning from the handle function.

type `ByteBufferAllocator` will also be removed because (1), we generally want to reduce the number of SwiftNIO types
exposed in the API, and (2), `ByteBufferAllocator` does not optimize the allocation strategies. The common pattern
observed across many libraries is to re-use existing `ByteBuffer`s as much as possible. This is also what we do for the
`LambdaCodableAdapter` (explained in the **Codable Support** section) implementation.
Contributor:

Perfect, this is a good justification for this change!

@tachyonics dismissed their stale review August 21, 2024 10:22:

Additional comments still to be addressed.

@0xTim (Member) left a comment:

This is looking good!

same way the lifecycles of the required services are managed, e.g.
`try await ServiceGroup(services: [postgresClient, ..., lambdaRuntime], ...).run()`.
- Dependencies can now be injected into `LambdaRuntime` — `swift-service-lifecycle` guarantees that the services will be
initialized _before_ the `LambdaRuntime`'s `run()` function is called.
Member:

What guarantees are these? ServiceLifecycle doesn't currently have any initialization order guarantees, unless I missed something.

Contributor (author):

Yes, you are right. I was not aware of this. The proposal has been updated to remove the term "guarantees" and now states that the required services will be initialized together with LambdaRuntime.

This leads to cases where ServiceLifecycle has called LambdaRuntime's run() function, and then LambdaRuntime requests something from a service that has not been started up yet.

The solution for this has to be provided by the services, e.g. PostgresNIO holds such requests until its own run() function has been called.

```swift
/// Wraps an underlying handler conforming to ``LambdaHandler``
/// with ``LambdaWithBackgroundProcessingHandler``.
public struct LambdaHandlerAdapter<
```
Member:

It's not clear how the Event is decoded and Output encoded in the LambdaHandlerAdapter.

Contributor (author):

LambdaHandlerAdapter does not perform any encoding/decoding, since it just wraps LambdaHandler with LambdaWithBackgroundProcessingHandler (both protocols have the generic Event and Output types).

LambdaCodableAdapter (which wraps LambdaWithBackgroundProcessingHandler with StreamingLambdaHandler) does encoding/decoding through the encoder and decoder passed to its constructor.

```swift
try await outputWriter.write(result: Greeting(echoedMessage: event.messageToEcho))

// Perform some background work, e.g:
try await Task.sleep(for: .seconds(10))
```
Member:

I like this setup a lot

…ResponseWriter

Remove `~Copyable` from `LambdaResponseStreamWriter` and `LambdaResponseWriter`. Instead throw an error when `finish()` is called multiple times or when `write`/`writeAndFinish` is called after `finish()`.
@aryan-25 mentioned this pull request Aug 23, 2024

```swift
init(context: AWSLambdaRuntimeCore.LambdaInitializationContext) async throws {
    /// Instantiate service
    let client = PostgresClient(configuration: ...)
```
Contributor:

Using Postgres as an example of something to use with AWS Lambda is quite misleading for users, for two reasons:

  • The main point of using Lambda is autoscaling, which is usually very difficult to obtain from an RDS like Postgres and could require connection pooling features. Ideally, the use case we should present is a connection to a NoSQL database such as DynamoDB or MongoDB.
  • In addition, given the connection-oriented nature of the DB protocol, opening the client connection during the Lambda's initialisation could lead to connection time-outs due to the life cycle of the Lambda. There is no guarantee of how long a Lambda will stay allocated after its first use. At some point the connection will be closed from the DB side, leaving subsequent invocations with a closed connection on the Lambda side.

Member:

> The main point of using Lambda is autoscaling, this is usually something very difficult to obtain from a RDS like Postgres and could require connection pooling features. Ideally, the use case we should present is the connection to a NO-SQL database such as DynamoDB or MongoDB.

While I generally agree, neither Soto nor MongoDB integrate with swift-service-lifecycle today. The goal of this new API is to integrate better with the shared building blocks of the Swift server ecosystem. Since swift-service-lifecycle is an important one, we opted for PostgresNIO as it integrates with it today.

> In addition giving the nature of connection oriented protocol of the DB, opening the client connection during the Lambda's initialisation could lead to connection time-out due to the life cycle of the Lambda.

PostgresClient can lazily create connections. This is totally configurable.

> At some point the connection of the DB will be closed from the DB side, leaving subsequent invocation with a closed connection from the Lambda side.

That's totally fine, the connection pool will automatically reconnect for the user.

@fabianfett (Member):

The discussion here has settled. @aryan-25 addressed most of the feedback. From my point of view, the only open issue concerns the use of DispatchWallTime in LambdaContext. I've created an extra ticket for this #384.

Last call to all participants: I intend to merge this PR on Monday. Is there any other unaddressed feedback besides #384, that we should create an extra ticket for?

@fabianfett merged commit 83bd667 into swift-server:main Oct 9, 2024
12 of 13 checks passed
sebsto pushed a commit to sebsto/swift-aws-lambda-runtime that referenced this pull request Oct 9, 2024
Co-authored-by: Fabian Fett <fabianfett@apple.com>