Parsing URLs
In a realistic web app, we want to show different content at different addresses:

- /search?q=seiza
- /settings
How do we do that? We use the `elm/url` package to parse the raw strings into nice Elm data structures. This package makes the most sense when you just look at examples, so that is what we will do!
Say we have an art website where the following addresses should be valid:
- /topic/architecture
- /topic/painting
- /blog/42
- /blog/123
- /blog/451
- /user/tom
- /user/sue
- /user/sue/comment/11
So we have topic pages, blog posts, user information, and a way to look up individual user comments. We would use the `Url.Parser` module to write a URL parser like this:
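Here is a sketch of such a parser. The `Route` type and its constructor names are our own choice; everything else is the `Url.Parser` API:

```elm
import Url.Parser exposing ((</>), Parser, int, map, oneOf, s, string)

-- One constructor per kind of page we want to recognize.
type Route
  = Topic String
  | Blog Int
  | User String
  | Comment String Int

routeParser : Parser (Route -> a) a
routeParser =
  oneOf
    [ map Topic   (s "topic" </> string)                        -- /topic/architecture
    , map Blog    (s "blog" </> int)                            -- /blog/42
    , map User    (s "user" </> string)                         -- /user/tom
    , map Comment (s "user" </> string </> s "comment" </> int) -- /user/sue/comment/11
    ]
```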
The `Url.Parser` module makes it quite concise to fully turn valid URLs into nice Elm data!
Next, say we have a personal blog where addresses like this should be valid:

- /blog/12/the-history-of-chairs
- /blog/13/the-endless-september
- /blog/
- /blog?q=whales
- /blog?q=seiza
In this case we have individual blog posts and a blog overview with an optional query parameter. We need to add the `Url.Parser.Query` module to write our URL parser this time:
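Again a sketch, with `BlogPost` and `BlogQuery` as our own names; `Query.string "q"` handles the optional `?q=...` parameter:

```elm
import Url.Parser exposing ((</>), (<?>), Parser, int, map, oneOf, s, string)
import Url.Parser.Query as Query

type Route
  = BlogPost Int String
  | BlogQuery (Maybe String)

routeParser : Parser (Route -> a) a
routeParser =
  oneOf
    [ map BlogPost  (s "blog" </> int </> string)   -- /blog/12/the-history-of-chairs
    , map BlogQuery (s "blog" <?> Query.string "q") -- /blog  and  /blog?q=whales
    ]
```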
The `</>` and `<?>` operators let us write parsers that look quite like the actual URLs we want to parse. And adding `Url.Parser.Query` allowed us to handle query parameters like `?q=seiza`.
Okay, now we have a documentation website with addresses like this:
- /Basics
- /Maybe
- /List
- /List#map
- /List#filter
- /List#foldl
We can use the `fragment` parser from `Url.Parser` to handle these addresses like this:
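A sketch along these lines, with `Docs` as our own name for the parsed result:

```elm
import Url.Parser exposing ((</>), Parser, fragment, map, string)

-- A module name paired with an optional fragment.
type alias Docs =
  ( String, Maybe String )

docsParser : Parser (Docs -> a) a
docsParser =
  map Tuple.pair (string </> fragment identity)

-- /List      ==>  ( "List", Nothing )
-- /List#map  ==>  ( "List", Just "map" )
```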
Now that we have seen a few parsers, we should look at how this fits into a `Browser.application` program. Rather than just saving the current URL like last time, can we parse it into useful data and show that instead?
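Here is a minimal sketch of the wiring, reusing a trimmed-down version of the first route parser. The `Model` shape and the placeholder `view` text are our own choices:

```elm
import Browser
import Browser.Navigation as Nav
import Html exposing (text)
import Url
import Url.Parser exposing ((</>), Parser, int, map, oneOf, parse, s, string)

type Route
  = Topic String
  | Blog Int

routeParser : Parser (Route -> a) a
routeParser =
  oneOf
    [ map Topic (s "topic" </> string)
    , map Blog (s "blog" </> int)
    ]

type alias Model =
  { key : Nav.Key
  , route : Maybe Route -- parsed data instead of the raw URL
  }

type Msg
  = LinkClicked Browser.UrlRequest
  | UrlChanged Url.Url

init : () -> Url.Url -> Nav.Key -> ( Model, Cmd Msg )
init _ url key =
  ( Model key (parse routeParser url), Cmd.none )

update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
  case msg of
    LinkClicked (Browser.Internal url) ->
      ( model, Nav.pushUrl model.key (Url.toString url) )

    LinkClicked (Browser.External href) ->
      ( model, Nav.load href )

    UrlChanged url ->
      -- parse the new URL rather than just saving it
      ( { model | route = parse routeParser url }, Cmd.none )

view : Model -> Browser.Document Msg
view model =
  { title = "My Art Site"
  , body =
      [ case model.route of
          Just (Topic name) ->
            text ("Topic: " ++ name)

          Just (Blog id) ->
            text ("Blog post #" ++ String.fromInt id)

          Nothing ->
            text "Page not found"
      ]
  }

main : Program () Model Msg
main =
  Browser.application
    { init = init
    , update = update
    , view = view
    , subscriptions = \_ -> Sub.none
    , onUrlRequest = LinkClicked
    , onUrlChange = UrlChanged
    }
```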
The major new things are:
- Our `init` and `update` parse the URL into a `Route` instead of just saving the raw string.
- Our `view` function shows different content for different addresses!
It is really not too fancy. Nice!
But what happens when you have 10 or 20 or 100 different pages? Does it all go in this one function? Surely it cannot be all in one file. How many files should it be in? What should be the directory structure? That is what we will discuss next!