Replies: 9 comments 17 replies
-
For larger
-
See also
-
Error recovery. While not required for this case, it's highly correlated; the two will likely be done together. See #96
-
Ease of … See also https://www.reddit.com/r/rust/comments/1fa0h2c/thinking_on_switching_from_nom_to_lalrpop/llq526r/
-
CC @adamchalmers are you doing separate lexing and parsing?
-
Would it be possible to unify some of the traits? Specifically, it would be VERY nice if there was one big trait you had to implement which gave you virtually all the features of the library. Sure, it would be a pain to do, but you wouldn't have ambiguity when there are two implementations of similar things (like InputTake and PositionInput), and you would know that all the features are available to you. This is especially nice if you can hardcode a default error type into that trait, so when you use a parser that has an ambiguous error, the default for that input would be used. I also suspect this would make things easier for library maintainers, since less time would be spent figuring out which traits are needed for which functionality.
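To make the idea concrete, here is a minimal sketch of what such a "one big trait" might look like. None of these names (`UnifiedInput`, `StrInput`, `SimpleError`) exist in winnow; this is purely a hypothetical illustration of combining position tracking, token splitting, and a default error type in one place:

```rust
use std::fmt::Debug;

/// Hypothetical unified input trait: slicing/position tracking and a
/// default error type all live together, instead of being spread across
/// several small traits.
trait UnifiedInput: Sized {
    type Token: Debug;
    /// Hardcoded default error type for this input, used when a parser
    /// doesn't name one explicitly.
    type DefaultError: Debug + Default;

    /// Current offset into the input (what PositionInput-style traits give you).
    fn position(&self) -> usize;
    /// Split off the next token (what InputTake-style traits give you).
    fn next_token(&mut self) -> Option<Self::Token>;
}

#[derive(Debug, Default)]
struct SimpleError;

/// One example implementation over `&str`.
struct StrInput<'a> {
    rest: &'a str,
    offset: usize,
}

impl<'a> UnifiedInput for StrInput<'a> {
    type Token = char;
    type DefaultError = SimpleError;

    fn position(&self) -> usize {
        self.offset
    }

    fn next_token(&mut self) -> Option<char> {
        let c = self.rest.chars().next()?;
        self.rest = &self.rest[c.len_utf8()..];
        self.offset += c.len_utf8();
        Some(c)
    }
}
```

The trade-off the comment alludes to is real: a single trait is easier to discover, but it forces every input type to implement everything, even features it can't sensibly support.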
-
FYI I've posted #720 which adds
I'll leave it as a draft for a few days to collect feedback.
-
Again, it was a little confusing to me when approaching … Which leads me to ask: when would you recommend using winnow as a parser of lexed tokens? This might relate to the binary question in the other topic, and might be a nice element of the introduction.
-
This threw me for a loop too. Writing a separate tokenization step, and then using the token objects/enums it produces as the input to a second parser that outputs an AST, struck me as a much cleaner and easier-to-understand approach than rolling it all into one, so that's what I did.

After a bit of trial and error, reading the docs, and then going back and really reading the docs, I had a pretty good understanding of how the library worked and ended up with a tokenizer without a huge amount of effort. After that, though, I realized that trying to write parser functions that took anything other than &mut &str as input just resulted in fairly cryptic type errors (I also haven't written any Rust for a few months, so that could be partially on me).

I think a few things might have made the experience a bit easier/faster for me, and for anyone else with a similar "skim-read the docs and start churning out code" approach to learning new libraries:

(Also worth noting: I still haven't gotten very far into writing the AST generation step. I believe I understand what I need now, but it's very likely I'll hit hurdles I don't know about yet.)
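For readers who haven't tried the two-pass approach described above, here is a minimal, library-agnostic sketch. All names (`Token`, `Expr`, `lex`, `parse_expr`) are made up for illustration, and it uses plain std Rust rather than winnow, but the `&mut &[Token]` input shape for the second pass mirrors the `&mut &str` convention the comment mentions:

```rust
#[derive(Debug, Clone, PartialEq)]
enum Token {
    Num(i64),
    Plus,
}

#[derive(Debug, PartialEq)]
enum Expr {
    Num(i64),
    Add(Box<Expr>, Box<Expr>),
}

/// Pass 1: lex `&str` into a flat `Vec<Token>`.
fn lex(mut s: &str) -> Result<Vec<Token>, String> {
    let mut out = Vec::new();
    while !s.is_empty() {
        s = s.trim_start();
        if s.is_empty() {
            break;
        }
        if let Some(rest) = s.strip_prefix('+') {
            out.push(Token::Plus);
            s = rest;
        } else {
            let end = s.find(|c: char| !c.is_ascii_digit()).unwrap_or(s.len());
            if end == 0 {
                return Err(format!("unexpected input: {s:?}"));
            }
            out.push(Token::Num(s[..end].parse().map_err(|e| format!("{e}"))?));
            s = &s[end..];
        }
    }
    Ok(out)
}

/// Pass 2: parse `&[Token]` into an AST. The parser advances the input
/// slice in place, the same shape as a `&mut &str` text parser.
fn parse_expr<'a>(input: &mut &'a [Token]) -> Result<Expr, String> {
    let mut lhs = parse_num(input)?;
    while let [Token::Plus, rest @ ..] = *input {
        *input = rest;
        let rhs = parse_num(input)?;
        lhs = Expr::Add(Box::new(lhs), Box::new(rhs));
    }
    Ok(lhs)
}

fn parse_num<'a>(input: &mut &'a [Token]) -> Result<Expr, String> {
    // Copy the slice out first; `&[T]` is `Copy`, so this keeps the full
    // lifetime and lets us reassign `*input` to the matched remainder.
    let s: &'a [Token] = *input;
    match s {
        [Token::Num(n), rest @ ..] => {
            *input = rest;
            Ok(Expr::Num(*n))
        }
        _ => Err(format!("expected number, got {:?}", s.first())),
    }
}
```

The payoff of the split is that each pass stays small: the lexer only worries about characters and the parser only worries about token shapes, which also keeps error messages at the right level of abstraction.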
-
winnow works well for parsing text, but what about lexed tokens? How can we improve this?