Debugging Knative Eventing
This document is intended for people who are familiar with the object model of Knative Eventing. You don’t need to be an expert, but you do need to know roughly how things fit together.
- Setup Knative Eventing.
This guide uses an example consisting of an event source that sends events to a function.
See example.yaml for the entire YAML. For any commands in this guide to work, you must apply example.yaml:
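# Assumption: example.yaml has been saved to the current working directory.
kubectl apply --filename example.yaml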
Knative events will occur whenever a Kubernetes Event occurs in the knative-debug namespace. We can cause this to occur with the following commands:
kubectl --namespace knative-debug run to-be-deleted --image=image-that-doesnt-exist --restart=Never
# 5 seconds is arbitrary. We want K8s to notice that the Pod needs to be scheduled and generate at least one event.
sleep 5
kubectl --namespace knative-debug delete pod to-be-deleted
Then we can see the Kubernetes Events (note that these are not Knative events!):
kubectl --namespace knative-debug get events
This should produce output along the lines of:
LAST SEEN FIRST SEEN COUNT NAME KIND SUBOBJECT TYPE REASON SOURCE MESSAGE
20s 20s 1 to-be-deleted.157aadb9f376fc4e Pod Normal Scheduled default-scheduler Successfully assigned knative-debug/to-be-deleted to gke-kn24-default-pool-c12ac83b-pjf2
You’ve applied example.yaml and you are inspecting fn’s logs:
kubectl --namespace knative-debug logs -l app=fn -c user-container
But you don’t see any events arrive. Where is the problem?
First, check all of the created resources: do their statuses show Ready as True?
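One quick way to get an overview of all of them at once (the columns shown vary by resource and Knative version):
kubectl --namespace knative-debug get deployments,services,channels.messaging.knative.dev,subscriptions.messaging.knative.dev,apiserversources.sources.knative.dev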
We will attempt to determine why, working outward from the most basic pieces:
- fn - The Deployment has no dependencies inside Knative.
- svc - The Service has no dependencies inside Knative.
- chan - The Channel depends on its backing channel implementation and somewhat depends on sub.
- src - The Source depends on chan.
- sub - The Subscription depends on both chan and svc.
fn
kubectl --namespace knative-debug get deployment fn -o jsonpath='{.status.availableReplicas}'
We want to see 1. If you don’t, then you need to debug the Deployment. Is there anything obviously wrong mentioned in the status?
kubectl --namespace knative-debug get deployment fn --output yaml
If it is not obvious what is wrong, then you need to debug the Deployment, which is out of scope of this document.
Verify that the Pod is Ready:
kubectl --namespace knative-debug get pod -l app=fn -o jsonpath='{.items[*].status.conditions[?(@.type == "Ready")].status}'
This should return True. If it doesn’t, then try to debug the Deployment using the Kubernetes debugging guide.
svc
We just want to ensure this exists and has the correct name. If it doesn’t exist, then you probably need to re-apply example.yaml.
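For example (in example.yaml the Service is named svc):
kubectl --namespace knative-debug get service svc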
Verify it points at the expected pod.
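One way to do this is to list Pods using the Service’s selector (assuming it selects the app=fn label that fn’s Pods carry):
kubectl --namespace knative-debug get pod -l app=fn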
This should return a single Pod, which, if you inspect it, is the one generated by fn.
chan
chan uses the in-memory-channel. This is a very basic channel and has few failure modes that will be exhibited in chan’s status.
kubectl --namespace knative-debug get channel.messaging.knative.dev chan -o jsonpath='{.status.conditions[?(@.type == "Ready")].status}'
This should return True. If it doesn’t, get the full resource:
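kubectl --namespace knative-debug get channel.messaging.knative.dev chan --output yaml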
Next verify that chan is addressable:
kubectl --namespace knative-debug get channel.messaging.knative.dev chan -o jsonpath='{.status.address.hostname}'
This should return a hostname, likely ending in ‘.cluster.local’. If it doesn’t, then it implies that something went wrong during reconciliation. See Channel Controller.
We will verify that the two resources that chan creates exist and are Ready.
Service
chan creates a K8s Service.
kubectl --namespace knative-debug get service -l messaging.knative.dev/role=in-memory-channel
Its spec is completely unimportant, as Istio will ignore it. It just needs to exist so that src can send events to it. If it doesn’t exist, it implies that something went wrong during chan reconciliation. See Channel Controller.
src
src is an ApiServerSource.
First we will verify that src is writing to chan.
kubectl --namespace knative-debug get apiserversource src -o jsonpath='{.spec.sink}'
This should return map[apiVersion:messaging.knative.dev/v1 kind:Channel name:chan]. If it doesn’t, then src was set up incorrectly and its spec needs to be fixed. Fixing should be as simple as updating its spec to have the correct sink (see example.yaml).
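A minimal sketch of the relevant fragment of src’s spec, mirroring the .spec.sink value shown above (depending on the ApiServerSource API version in use, the reference may instead need to be nested under sink.ref; example.yaml is the authoritative source):
spec:
  sink:
    apiVersion: messaging.knative.dev/v1
    kind: Channel
    name: chan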
Now that we know src is sending to chan, let’s verify that it is Ready.
kubectl --namespace knative-debug get apiserversource src -o jsonpath='{.status.conditions[?(@.type == "Ready")].status}'
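This should return True. If it doesn’t, get the full resource and inspect its status conditions:
kubectl --namespace knative-debug get apiserversource src --output yaml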
sub
sub is a Subscription from chan to fn.
Verify that sub is Ready:
kubectl --namespace knative-debug get subscription sub -o jsonpath='{.status.conditions[?(@.type == "Ready")].status}'
This should return True. If it doesn’t, then look at all the status entries:
kubectl --namespace knative-debug get subscription sub --output yaml
Each of the resources has a Controller that is watching it. As of today, they tend to do a poor job of writing failure status messages and events, so we need to look at the Controller’s logs.
Note
The Kubernetes Deployment Controller, which controls fn, is out of scope for this document.
Service Controller
The Kubernetes Service Controller, controlling svc, is out of scope for this document.
Channel Controller
There is not a single Channel Controller. Instead, there is one Controller for each Channel CRD. chan uses the InMemoryChannel Channel CRD, whose Controller is:
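# Assumption: the controller Pod carries the same labels that are used for its logs below.
kubectl --namespace knative-eventing get pod -l messaging.knative.dev/channel=in-memory-channel,messaging.knative.dev/role=controller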
See its logs with:
kubectl --namespace knative-eventing logs -l messaging.knative.dev/channel=in-memory-channel,messaging.knative.dev/role=controller
Pay particular attention to any lines that have a logging level of warning or error.
Source Controller
Each Source will have its own Controller. src is an ApiServerSource, so its Controller is:
kubectl --namespace knative-eventing get pod -l app=sources-controller
ApiServerSource Controller
The ApiServerSource Controller is run in the same binary as some other Source Controllers from Eventing. It is:
kubectl --namespace knative-debug get pod -l eventing.knative.dev/sourceName=src,eventing.knative.dev/source=apiserver-source-controller
View its logs with:
kubectl --namespace knative-debug logs -l eventing.knative.dev/sourceName=src,eventing.knative.dev/source=apiserver-source-controller
Pay particular attention to any lines that have a logging level of warning or error.
Subscription Controller
The Subscription Controller controls sub. It attempts to resolve the addresses that a Channel should send events to and, once resolved, injects them into the Channel’s spec.subscribable.
kubectl --namespace knative-eventing get pod -l app=eventing-controller
View its logs with:
kubectl --namespace knative-eventing logs -l app=eventing-controller
Pay particular attention to any lines that have a logging level of warning or error.
The entire Control Plane looks healthy, but we’re still not getting any events. Now we need to investigate the data plane.
The Knative event takes the following path:
1. The event is generated by src. In this case, it is caused by having a Kubernetes Event trigger it, but as far as Knative is concerned, the Source is generating the event de novo (from nothing).
2. src POSTs the event to chan’s address, http://chan-kn-channel.knative-debug.svc.cluster.local.
3. The Channel Dispatcher receives the request and introspects the Host header to determine which Channel it corresponds to. It sees that it corresponds to knative-debug/chan, so it forwards the request to the subscribers defined in sub, in particular svc, which is backed by fn.
4. fn receives the request and logs it.
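To exercise this path by hand, one option is to POST a CloudEvent directly to chan’s address from inside the cluster. This is only a sketch: the curlimages/curl image and the event attributes below are arbitrary choices, not part of example.yaml. If the data plane is healthy, fn should log the event:
kubectl --namespace knative-debug run manual-event --image=curlimages/curl --rm -i --restart=Never -- \
  curl -v -X POST http://chan-kn-channel.knative-debug.svc.cluster.local \
    -H "Ce-Id: manual-test-1" \
    -H "Ce-Specversion: 1.0" \
    -H "Ce-Type: dev.example.manual-test" \
    -H "Ce-Source: manual-test" \
    -H "Content-Type: application/json" \
    -d '{"msg": "manual test"}'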
We will investigate components in the order in which events should travel.
Channel Dispatcher
The Channel Dispatcher is the component that receives POSTs pushing events into Channels and then POSTs to subscribers of those Channels when an event is received. For the in-memory-channel used in this example, there is a single binary that handles both the receiving and dispatching sides for all in-memory-channel Channels.
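To locate the Dispatcher Pod itself, one option (assuming it carries the same labels that are used for its logs below) is:
kubectl --namespace knative-eventing get pod -l messaging.knative.dev/channel=in-memory-channel,messaging.knative.dev/role=dispatcher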
First we will inspect the Dispatcher’s logs to see if there is anything obvious:
kubectl --namespace knative-eventing logs -l messaging.knative.dev/channel=in-memory-channel,messaging.knative.dev/role=dispatcher -c dispatcher
Ideally we will see lines like:
{"level":"info","ts":"2019-08-16T13:50:55.424Z","logger":"inmemorychannel-dispatcher.in-memory-channel-dispatcher","caller":"provisioners/message_receiver.go:147","msg":"Request mapped to channel: knative-debug/chan-kn-channel","knative.dev/controller":"in-memory-channel-dispatcher"}
{"level":"info","ts":"2019-08-16T13:50:55.981Z","logger":"inmemorychannel-dispatcher.in-memory-channel-dispatcher","caller":"provisioners/message_receiver.go:140","msg":"Received request for chan-kn-channel.knative-debug.svc.cluster.local","knative.dev/controller":"in-memory-channel-dispatcher"}
This shows that the request is being received and then sent to fn, which is returning a 2XX response code (likely 200, 202, or 204).
However, if we instead see an error logged when the event is dispatched, then we know there was a problem POSTing to http://svc.knative-debug.svc.cluster.local/.