RFC: add debug functionality to test with mock server #69


Closed
wants to merge 3 commits from feature/local-server

Conversation

tomerd
Contributor

@tomerd tomerd commented May 5, 2020

motivation: allow end-to-end testing locally

changes:

  • add a Lambda+LocalServer which exposes Lambda.withLocalServer available only in DEBUG mode
  • local server can receive POST requests with payloads on a configurable endpoint and send them to the Lambda
  • add a "noContent" mode to Lambda runtime to allow polling
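A minimal usage sketch of the proposed API (the closure-based string handler shown here is an assumption about the package's `Lambda.run` overloads at the time of this PR; the exact handler signature may differ):

```swift
// Sketch only: wrap the runtime in the DEBUG-only local server,
// then POST a payload to the configurable endpoint to invoke it.
#if DEBUG
try Lambda.withLocalServer {
    Lambda.run { (_, payload: String, callback: (Result<String, Error>) -> Void) in
        // echo the payload back to the caller
        callback(.success(payload))
    }
}
#endif
```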

@tomerd tomerd requested a review from fabianfett May 5, 2020 03:38
    public static func withLocalServer(invocationEndpoint: String? = nil, _ body: @escaping () -> Void) throws {
        let server = LocalLambda.Server(invocationEndpoint: invocationEndpoint)
        try server.start().wait()
        defer { try! server.stop() } // FIXME:
Contributor Author


👀

    func processRequest(context: ChannelHandlerContext, request: (head: HTTPRequestHead, body: ByteBuffer?)) {
        if request.head.uri.hasSuffix(self.invocationEndpoint) {
            if let work = request.body {
                let requestId = "\(DispatchTime.now().uptimeNanoseconds)" // FIXME:
Contributor Author


👀 need to think of something better?

Member


An AWS request id is a UUID. So I guess we should use a UUID for this?

Contributor Author


we could, but that will pull in Foundation
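For reference, the two options under discussion side by side (a sketch; the Foundation-backed id is the suggestion above, the Dispatch-based one is what the PR currently does):

```swift
import Dispatch
import Foundation

// Foundation-backed request id, as suggested above (a proper UUID,
// at the cost of the Foundation dependency):
let requestId = UUID().uuidString

// The current Foundation-free placeholder, for comparison:
let fallbackId = "\(DispatchTime.now().uptimeNanoseconds)" // FIXME:
```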

    // MARK: - Local Mock Server

    private enum LocalLambda {
        struct Server {
Contributor Author

@tomerd tomerd May 5, 2020


maybe we should call this MockServer to make it even clearer?

@tomerd tomerd force-pushed the feature/local-server branch 2 times, most recently from 265d08e to 11bc8d1 on May 5, 2020 04:14
@tomerd
Contributor Author

tomerd commented May 5, 2020

@swift-server-bot test this please

@tomerd
Contributor Author

tomerd commented May 5, 2020

    }
    throw error
    }
    .always { result in
    // we are done!
Contributor Author


@fabianfett these are the changes I like the least. In a real-world Lambda this will never happen, since the Lambda will be frozen until work is available, but in this case we continue to poll if "noContent" is received. happy to hear better ideas

Member


Within the Lambda.run we already listen for a stop signal:

    internal static func runAsync(eventLoopGroup: EventLoopGroup, configuration: Configuration, factory: @escaping HandlerFactory) -> EventLoopFuture<Int> {
        Backtrace.install()
        var logger = Logger(label: "Lambda")
        logger.logLevel = configuration.general.logLevel
        let lifecycle = Lifecycle(eventLoop: eventLoopGroup.next(), logger: logger, configuration: configuration, factory: factory)
        let signalSource = trap(signal: configuration.lifecycle.stopSignal) { signal in
            logger.info("intercepted signal: \(signal)")
            lifecycle.shutdown()
        }
        return lifecycle.start().flatMap {
            return lifecycle.shutdownFuture.always { _ in
                signalSource.cancel()
            }
        }
    }

The shutdown method triggers a state change, which is only evaluated before a new cycle. That means if we are waiting for a new task, shutdown() should cancel the next request.

@tomerd tomerd force-pushed the feature/local-server branch 2 times, most recently from b5a8f74 to 9863db8 on May 5, 2020 05:13
@tomerd
Contributor Author

tomerd commented May 5, 2020

The 5.3 failure is due to a swift-nio issue on 5.3; it should be fixed soon and is safe to ignore for now.

Member

@fabianfett fabianfett left a comment


Great start! There are a couple of bigger issues that we will need to tackle, but overall this is a move in the right direction!

    var logger = Logger(label: "LocalLambdaServer")
    logger.logLevel = configuration.general.logLevel
    self.logger = logger
    self.group = MultiThreadedEventLoopGroup(numberOfThreads: System.coreCount)
Member


Do we really want to support multiple eventLoops here? I think @weissi's main event loop could also be sufficient here. That could save us locks (which would make the code simpler) and therefore make it easier to debug. And performance shouldn't be a problem anyway.

Member


We should not take over the main thread, since we are running in the same process as the lambda executor/runner (which will take over the main thread itself). But we should use only one thread though ;)

Contributor


I think one single threaded ELG (with the new takeover stuff) should be enough for lambda. Server & client can also share.
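A sketch of the single-threaded setup being suggested (how the `group` would be wired into the server and client bootstraps is assumed, not shown in this PR):

```swift
import NIO

// One event loop for the whole local setup; server and client share it.
// A single thread avoids cross-thread state, and so avoids the locks
// that would otherwise have to guard it.
let group = MultiThreadedEventLoopGroup(numberOfThreads: 1)
defer { try? group.syncShutdownGracefully() }

// pass `group` to both the local server bootstrap and the runtime client,
// so all channels run on the same event loop
```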


    (AmazonHeaders.requestID, pending.key),
    (AmazonHeaders.invokedFunctionARN, "arn:aws:lambda:us-east-1:\(Int16.random(in: Int16.min ... Int16.max)):function:custom-runtime"),
    (AmazonHeaders.traceID, "Root=\(Int16.random(in: Int16.min ... Int16.max));Parent=\(Int16.random(in: Int16.min ... Int16.max));Sampled=1"),
    (AmazonHeaders.deadline, "\(DispatchWallTime.distantFuture.millisSinceEpoch)"),
Member


We should add something here. Maybe even make this configurable?

Member


We should probably even have a real timeout schedule here as well.
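A hedged sketch of what a configurable deadline could look like (the `invocationTimeout` knob is hypothetical, and `millisSinceEpoch` is assumed to be the package's existing `DispatchWallTime` helper):

```swift
import Dispatch

// Hypothetical configuration knob instead of DispatchWallTime.distantFuture:
let invocationTimeout: DispatchTimeInterval = .seconds(60)

// Deadline header value: now + timeout, rather than a distant future.
let deadline = DispatchWallTime.now() + invocationTimeout

// Then the header tuple would become something like:
// (AmazonHeaders.deadline, "\(deadline.millisSinceEpoch)")
```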


@tomerd tomerd force-pushed the feature/local-server branch 4 times, most recently from 4857d6f to c24bce7 on May 6, 2020 02:44
@tomerd
Contributor Author

tomerd commented May 6, 2020

@fabianfett I've updated this based on #70 plus a fixup on top. I think it looks much better, thank you. The request/response bit is still order-sensitive (since it is not using a dictionary any longer), but maybe that is good enough for what this is designed for?

followups:

  1. do we feel this is a good direction to go?
  2. anything open in the code we need to address?
  3. do we need a withLocalAPIGatewayServer variant that will wrap the invocation request in an APIGateway.Request, to help build such popular Lambda functions?

wdyt?

tomerd and others added 3 commits May 7, 2020 13:26
motivation: allow end-to-end testing locally

changes:
* add a Lambda+LocalServer which exposes Lambda.withLocalServer available only in DEBUG mode
* local server can receive POST requests with payloads on a configurable endpoint and send them to the Lambda
* add a "noContent" mode to Lambda runtime to allow polling
* Don’t exit immediately
* Removed locks. Just running in one EL
@fabianfett fabianfett force-pushed the feature/local-server branch from bf4249c to 2cf4e1b on May 7, 2020 11:27
@tomerd
Contributor Author

tomerd commented May 15, 2020

replaced by #73

@tomerd tomerd closed this May 15, 2020
@fabianfett fabianfett deleted the feature/local-server branch May 19, 2020 15:18