
e2e test framework poc needs more flexibility #37

Closed
dprotaso opened this issue Nov 16, 2020 · 26 comments
Labels
lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.

Comments

@dprotaso

So none of the networking conformance tests currently have levels. Only serving has some.

I think for serving a runtime conformance test will be structured as

  1. Create a Service
  2. Make a get request to get some info about the running environment
  3. Run the levelled assertions against the returned info
  4. Repeat step 3 until you have no more assertions

A problem I see is that the info from step 2 needs to go to step 3. With the framework as is, this is going to be cumbersome - especially if each levelled assertion has unique logic.

Originally posted by @dprotaso in #30 (comment)

@dprotaso

/assign @n3wscott

@dprotaso

Another example would be the API operations for different types

https://github.com/knative/docs/blob/master/docs/serving/spec/knative-api-specification-1.0.md#service

@n3wscott

Do you have an example test with code so I can understand?

@dprotaso commented Nov 18, 2020

If I was using https://github.com/knative-sandbox/reconciler-test/blob/master/pkg/test/context.go

I'd write it as follows

// see poc: https://github.com/knative/networking/blob/de41c209249935d1b1c915ccc6179f2bf60ca221/test/conformance/ingress/run_test.go
func TestService(gotest *testing.T) {
    t := globals.NewT(gotest)
    t.Stable("Service", StableAPIOps)
}

func StableAPIOps(t *test.T) {
  var s v1.Service = someTestService()
 
  t.Must("Create", func(t *test.T) {
      t.ServingClient.Create(...)
      ...
  })

  t.Must("Update", func(t *test.T) {
      t.ServingClient.Update(...)
      ...
  })

  t.Must("Get", func(t *test.T) {
      t.ServingClient.Get(...)
      ...
  })

  t.Should("Patch", func(t *test.T) {
      t.ServingClient.Patch(...)
      ...
  })

  t.Must("Delete", func(t *test.T) {
      t.ServingClient.Delete(...)
      ...
  })
}

@n3wscott commented Nov 18, 2020

I would assume that looks something like:

func StableAPIOpsFeature() *feature.Feature {
  f := new(feature.Feature)
  
  f.Stable("Service").
    Must("Create", func(ctx context.Context, t *test.T) {
      servingclient.Get(ctx).Create(...)
      ...
    }).
    Must("Update", func(ctx context.Context, t *test.T) {
      servingclient.Get(ctx).Update(...)
      ...
    }).
    Must("Get", func(ctx context.Context, t *test.T) {
      servingclient.Get(ctx).Get(...)
      ...
    }).
    Should("Patch", func(ctx context.Context, t *test.T) {
      servingclient.Get(ctx).Patch(...)
      ...
    }).
    Must("Delete", func(ctx context.Context, t *test.T) {
      servingclient.Get(ctx).Delete(...)
      ...
    })

    return f
}

We need to build some support code, like implementing namespace-scoped clients and injecting them into context: like the tbd servingclient package.

@n3wscott

The caveat is that each test should be independent of the previous one: if you depend on ordering, a test will fail, or it will be filtered out if only Should is run, etc.

@dprotaso commented Nov 19, 2020

The caveat is that each test should be independent of the previous one: if you depend on ordering, a test will fail, or it will be filtered out if only Should is run, etc.

In the prior example I have control over the test sequence since they naturally tie into golang's test lifecycle.

This is the point I'm trying to make: this separation makes things more cumbersome.

ie. another example: asserting certain filesystem properties in our runtime contracts.

func TestRuntimePaths(gotest *testing.T) {
  t := globals.NewT(gotest)
  var s v1.Service = someTestService()

  t.ServingClient.Create(s)
  // once ready fetch its runtime environment
  env := http.Get(s.Status.URL + "/runtime")

  for _, path := range runtimev1.MustFilesystemPaths {
    t.Must(path, checkPath(env, path))
  }

  for _, path := range runtimev1.MayFilesystemPaths {
    t.May(path, checkPath(env, path))
  }
}

@dprotaso

bump

@n3wscott commented Nov 24, 2020

I am trading some cumbersomeness for composability and decoupled tests.

I would write the TestRuntimePaths like this:

func TestRuntimePaths(t *testing.T) {
	ctx, env := global.Environment()

	var s v1.Service = someTestService()

	// Use the existing helpers to make a KSVC.
	env.Prerequisite(ctx, t, features.ServiceIsCreatedAndReady(s))

	// once KSVC is ready fetch its runtime environment
	rt := http.Get(s.Status.URL + "/runtime")

	// RuntimePaths is the feature we are trying to test.
	env.Test(ctx, runtime.RuntimePaths(rt))
  
	env.Finish()
}

// ...in a runtime features package...

func RuntimePaths(env http.Response) *feature.Feature {
	f := new(feature.Feature)

	s := f.Stable("RuntimePaths")

	for _, path := range runtimev1.MustFilesystemPaths {
		s.Must("have "+path, checkPath(env, path))
	}

	for _, path := range runtimev1.MayFilesystemPaths {
		s.May("have "+path, checkPath(env, path))
	}

	return f
}

This might look like more code, but it lets us separate the thing we are trying to assert (the payload of the runtime contract) from how we obtain that response. With some edits to the test entry point, we can point the same feature at another way of getting that data: say, a kn environment result, or a CloudRun curl against another kind of endpoint.

@dprotaso

The example above leads me to more questions/thoughts:

1) Use of Feature in lieu of a list of steps in Prerequisites

env.Prerequisite(ctx, t, features.ServiceIsCreatedAndReady(s))

Prerequisite accepting a Feature seems weird since there could be levelled requirements and/or feature state assertions that I wouldn't expect.

2) Generating state from prerequisite steps & using them in assertions

var s v1.Service = someTestService()
env.Prerequisite(ctx, t, features.ServiceIsCreatedAndReady(s))
rt := http.Get(s.Status.URL + "/runtime")

someTestService() in my mind just returns a v1.Service go struct. If that's the case is features.ServiceIsCreatedAndReady(s) meant to mutate s with the intent of setting a valid s.Status.URL?

env.Prerequisite and env.Test both take unique contexts so it seems like there's no hand off of state between the two functions. Meaning Prerequisite doesn't allow for the context to be mutated.

This leads to two ways to consume dependencies for an assertion - closures (in your example above) and the context function argument that's originally passed to Test (which I've seen include clients/informers etc.).

I'm not advocating we let env.Prerequisite modify the context. In order for that to work the context keys need to be known by both the prereq & assertion steps - which I'd argue is coupling them together. Maybe a workaround would be to dynamically create assertions with context keys as input but 🤷 this becomes fairly complex.

3) Coalescing of Features Steps

I was sold on the fact that we could describe features in a fluent style. I like this because having a central place to see all the aspects of a feature set is great. This is a pattern that originated from networking conformance.

ie. thus for a runtime conformance I'd like to write

func RuntimeV1Conformance() *feature.Feature {
    f := new(feature.Feature)
    f.Stable("filesystem", TestRuntimePaths)
    f.Stable("http", TestRuntimeHTTP)
    f.Stable("http-upgrade", TestRuntimeHTTPUpgrade)
    // etc..
    return f
}

In the example, the test setup/runner TestRuntimePaths has specific logic that's unique to RuntimePaths. Given that, I'm not sure we can still describe a holistic RuntimeV1Conformance feature if I needed to create unique services for different aspects of the runtime conformance.

It seems like I have two options:

  1. Figure out how to use something like features.ServiceIsCreatedAndReady(s) N times in my test runner
    • con:
      • changing a test requires changes in two places
      • setting up N services (on a single context?) to be consumed by N different assertions requires coordination
  2. Create the services in each assertion func(ctx.Context, t*testing.T)
    • con:
      • things that are written as steps can't be used here - ie. features.ServiceIsCreatedAndReady so there's going to be code duplication

4) t.Parallel()

Serving and Networking use t.Parallel() quite extensively, and they invoke it at the feature state boundary. ie. KIngress/basic is parallel wrt. KIngress/websocket. I'm not sure how to do this with feature.Feature, or why by default step assertions are all parallelized (#52) cc @slinkydeveloper

@dprotaso commented Nov 25, 2020

I'm beginning to wonder if feature.Feature should be called something else because the stable/alpha/beta calls are scoped to a feature.

Maybe feature.Set?

@slinkydeveloper

I'm not sure how to do this with feature.Feature or why by default step assertions are all parallelized (#52) cc @slinkydeveloper

@dprotaso since you want to parallelize all tests, why not? I'm not sure I get your question

@dprotaso

@dprotaso since you want to parallelize all tests, why not? I'm not sure I get your question

Shouldn't test authors control the flow? What if assertions have side-effects on the remote object being tested?

This seems relevant to #3 - To guarantee test isolation I would have to change my feature definition either by:

  1. Taking what would be in feature.Setup and moving it into every Step - ie. creating N Knative Services instead of one.
  2. Creating duplicate feature.Feature definitions with different Steps on each

@slinkydeveloper

Shouldn't test authors control the flow? What if assertions have side-effects on the remote object being tested?

An assertion by definition shouldn't have a side effect right? Are you sure that particular assertion shouldn't live in the setup phase?

@dprotaso commented Nov 25, 2020

An assertion by definition shouldn't have a side effect right?

Yup - I agree.

Are you sure that particular assertion shouldn't live in the setup phase?

Ideally it should - but then I'm forced to split my feature.Feature definition. With the side effects being:

  1. the same issue I mentioned above where some arguments are passed via closures and some via context
  2. env.Test() doesn't run in parallel to other env.Test() invocations. Splitting features would actually slow things down - unless I added more test runners func Test...(t *testing.T)

@dprotaso dprotaso changed the title e2e test framework poc needs more flexibility with levels e2e test framework poc needs more flexibility Nov 25, 2020
@slinkydeveloper

Ideally it should - but then I'm forced to split my feature.Feature definition

Can you provide a use case for that? It might help me reason about the issue.

@dprotaso commented Nov 25, 2020

I mentioned it all here see point 2): #37 (comment)

Another hypothetical

kingress := new(feature.Feature)

kingress.Stable("websocket").Must("receive traffic", AssertWebSocketTraffic())
kingress.Stable("http2").Must("receive traffic", AssertHTTP2Traffic())

If each Stable feature requires a different KIngress configuration where should they be setup?

  1. Inside Assert*Traffic calls?
    • con: not really using feature.Setup anymore so you can't re-use steps
  2. Setup on the feature?
    • con: all the assertions are blocked until all the KIngresses are ready & how do you transfer the right ingress endpoint to the assertion?
  3. Split the feature into two?
    • con: more boilerplate, tests run linearly unless you add more Test funcs

wrt. side-effects, this example doesn't have one - I'm just highlighting a use case and the implications of splitting features.

@slinkydeveloper

I mentioned it all here see point 2)

This is kinda similar to the discussion happening here: #51. My understanding is that to generate state you should use Setup. If you need to generate state, then assert, then generate state and then assert again, you need to develop 2 features.

I see this even more clearly in your sample with kingress, where yes there is more boilerplate, but it clearly shows how websocket and http2 testing should be 2 different features.

Maybe what might be useful is something like a "feature template": in your case the setup and teardown are the same except (I guess) one step that triggers the state change to stimulate the kingress. But, assuming you create the websocket and http2 features, you could mostly share the same setup and teardown.

So what we could do is create a sort of "template" that you can use to define common setup and teardown for several features: the websocket and http2 features would eventually share the same kingress data plane feature template. But still, "at runtime", those features are executed separately, so the setup and teardown steps are in fact repeated.

@n3wscott

The name feature.Feature could be wrong; I have been more focused on how these things are composable, and on the signatures of the steps and feature providers.

1) Use of Feature in lieu of a list of steps in Prerequisites

Ville wanted a setup step, external to the Test or Feature, to get the env to a state that is required by, but orthogonal to, the test you would like to perform. In Eventing this would be installing a class of Broker; in Serving this would be configuring the ingress type. To test out the usage I thought it would be handy to use feature.Feature (bikeshed the name) as a shortcut to make an env.Prerequisite with the same signature as env.Test. It is for the author's, reporter's, and debugger's convenience, to make it clear that what is being tested or asserted inside the Prerequisites phase is independent of, but required for, the Test phase.

2) Generating state from prerequisite steps & using them in assertions

I think var s v1.Service = someTestService() should have been var s *v1.Service = someTestService() and it will work out.

This leads to two ways to consume dependencies for an assertion - closures (in your example above) and the context function argument that's originally passed to Test (which I've seen include clients/informers etc.).

The framework should provide the hooks but make no opinion on which way is best. For some cases having some magic thing in context might be the best, like a set of namespaced clients setup in some Prerequisite phase serving likes to use.

I'm not sure we can still describe a holistic RuntimeV1Conformance feature.

I don't think you can, or want to. I would see this written as a series of Test calls on an environment, with a list of features that are required to pass conformance:

// RuntimeV1Conformance
func TestRuntimeV1Conformance(t *testing.T) {
	ctx, env := global.Environment()

	env.Test(ctx, t, conformance.Feature1())
	env.Test(ctx, t, conformance.Feature2())
	env.Test(ctx, t, conformance.Feature3())
	env.Test(ctx, t, conformance.Feature4())
	<... etc>

	env.Finish()
}

The results of this will be collected into a report that is more easily consumable and understood because the Test phase focused on an aspect of conformance. (that is my hope)

RuntimeV1Conformance feature if I needed to create unique services for different aspects of the runtime conformance

I would see this as something like:

// RuntimeV1Conformance
func TestRuntimeV1Conformance(t *testing.T) {
	ctx, env := global.Environment()
	s := "ksvc-name"
	env.Precondition(ctx, t, conformance.GivenService(s))

	env.Test(ctx, t, conformance.Feature1(s))
	env.Test(ctx, t, conformance.Feature2(s))
	env.Test(ctx, t, conformance.Feature3(s))
	env.Test(ctx, t, conformance.Feature4(s))
	<... etc>

	env.Finish()
}

4) t.Parallel()

Big plus one, we need to support this, the runner code is total PoC. I am trying to focus on the following things:

  1. Features and Steps are vendorable across projects.
  2. Several Features can be run on a single env.
  3. StepFn has no external dependencies, even on the framework, unless you are opting into some base feature.

I have run into some struggles so far with the PoC. One is that the timing and isolation of the Steps makes it hard to pass results, so the step and feature isolation forces you to think about how to compose the test so it has no or few dependencies. This results in each step having a bit more code than you might expect, but it also means the steps are composable in ways beyond how you originally wrote them. As an example:

// TestBrokerAsMiddleware
func TestBrokerAsMiddleware(t *testing.T) {
	t.Parallel()

	ctx, env := global.Environment(
		knative.WithKnativeNamespace(system.Namespace()),
		knative.WithLoggingConfig,
		knative.WithTracingConfig,
		k8s.WithEventListener,
	)

	// Install and wait for a Ready Broker.
	env.Prerequisite(ctx, t, features.BrokerGoesReady("default", "MTChannelBroker"))

	// Test that a Broker can act as middleware.
	env.Test(ctx, t, features.BrokerAsMiddleware("default"))

	env.Finish()
}

Here, I need a Broker to be ready, but the focus of the test is not a ready Broker. I wanted to write a feature that assumes a ready Broker of a given name exists, and asserts I can pass events through it. This means I can vendor the features, and other downstream repos can leverage this same test:

// <... In eventing-rabbitmq ...>
// TestBrokerAsMiddleware
func TestBrokerAsMiddleware(t *testing.T) {
	t.Parallel()

	ctx, env := global.Environment(
		knative.WithKnativeNamespace(system.Namespace()),
		knative.WithLoggingConfig,
		knative.WithTracingConfig,
		k8s.WithEventListener,
	)

	// Create a RabbitmqCluster in the env, the CO that creates the underlying RabbitMQ Broker (not knative).
	env.Prerequisite(ctx, t, rabbitfeatures.RabbitMQBrokerIsCreated())

	// Install and wait for a Ready Broker.
	env.Prerequisite(ctx, t, features.BrokerGoesReady("default", "RabbitMQBroker"))

	// Test that a Broker can act as middleware.
	env.Test(ctx, t, features.BrokerAsMiddleware("default"))

	env.Finish()
}

So the downstream repo has opted to include this test, but they only have to add the test entry point, not the features. The reasoning here is that the downstream likely needs to do some additional setup, like above.

@dprotaso

then generate state and then assert again you need to develop 2 features.

Then I'm confused by the feature state methods (Alpha, Beta, Stable etc.) on the Feature struct. Per my third point above and the following observation, I think it's important to be able to go to a single place to see a group of related features together, and their level of maturity. ie. like the KIngress example.

A template approach would work - but I was pointing out in 2) that things are neither clearly defined nor consistent.

@n3wscott

I would need to know more about KIngress to be able to answer in full, but I have been assuming a Feature tests a contained set of functionality, and variations would be passed down to it.

I think you can compose the test you are wanting to, and you will have to do a bit more work when trying to special case a particular feature of an implementation, if that is not generally testable by all implementations.

@dprotaso

1) Use of Feature in lieu of a list of steps in Prerequisites

It is for the author's, reporter's, and debugger's convenience, to make it clear that what is being tested or asserted inside the Prerequisites phase is independent of, but required for, the Test phase.

Whatever goes here shouldn't have state (alpha, beta, stable) or level (must, should, may) decorations, since those could be skipped via environment flags etc.

ie. What if I was only testing alpha features of a broker - the broker ready feature passed to a prereq wouldn't run

https://github.com/knative/eventing/blob/fe1b34c4c084eaca1f964e09bd791a428c7bb8cf/test/rekt/features/broker_feature.go#L42-L44

2) Generating state from prerequisite steps & using them in assertions

I think var s v1.Service = someTestService() should have been var s *v1.Service = someTestService() and it will work out.

The pointer doesn't change my interpretation - s still needs to be mutated or refetched to get additional info for the subsequent steps.

3) Coalescing of Features Steps

I'm not sure we can still describe a holistic RuntimeV1Conformance feature.

I don't think you can or want to. I would see this written as a series of Test calls on an environment with a list of features that are required to pass conformance:

We do this in networking currently and it's good for discoverability. A side effect is that diffs become very clear: https://github.com/knative/networking/pull/277/files. Another example

RuntimeV1Conformance feature if I needed to create unique services for different aspects of the runtime conformance

I would see this as something like:

// RuntimeV1Conformance
func TestRuntimeV1Conformance(t *testing.T) {
	ctx, env := global.Environment()
	s := "ksvc-name"
	env.Precondition(ctx, t, conformance.GivenService(s))

	env.Test(ctx, t, conformance.Feature1(s))
	env.Test(ctx, t, conformance.Feature2(s))
	env.Test(ctx, t, conformance.Feature3(s))
	env.Test(ctx, t, conformance.Feature4(s))
	<... etc>

	env.Finish()
}

I see only one service in this example. Also, we want to group relevant conformance features to be run together, to make it easier for downstream folks to run the correct set of tests. Right now in networking this is done using an exported RunConformance function.

The results of this will be collected into a report that is more easily consumable and understood because the Test phase focused on an aspect of conformance. (that is my hope)

Can you elaborate?

@dprotaso

I would need to know more about KIngress to be able to answer in full, but I have been assuming a Feature tests a contained set of functionality, and variations would be passed down to it.

I think you can compose the test you are wanting to, and you will have to do a bit more work when trying to special case a particular feature of an implementation, if that is not generally testable by all implementations.


It's conformance, so there are no variations on the features being tested. The only thing that differs is the ingress installation, and maybe configuring some global test properties (ie. does DNS work, which ingress class, etc.)

@n3wscott

I have made several issues to continue the discussion in the forks we have made:

for 1):

Framework should report Prerequisite features with Asserts #64

for 2):

Framework needs to help test authors understand how to pass state between steps #66

for 3):

Framework needs a way to group Features into Sets #65

The results of this will be collected into a report that is more easily consumable and understood because the Test phase focused on an aspect of conformance. (that is my hope)

Can you elaborate?

I think we need to define this a bit more. The thinking is that the runner can get metadata out of the feature and generate data around the run of the test; conformance is somehow different, but I think we are waiting for the conformance group to ask for data.

@dprotaso commented Nov 26, 2020

Tagging @nak3 @ZhiminXiang @tcnghia - can someone from networking start taking a look at the current framework and the open issues Scott created? I don't want to be the sole person representing the group.

It may be worth doing a small POC with one or two KIngress features.

@github-actions

This issue is stale because it has been open for 90 days with no
activity. It will automatically close after 30 more days of
inactivity. Reopen the issue with /reopen. Mark the issue as
fresh by adding the comment /remove-lifecycle stale.

@github-actions github-actions bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 25, 2021