Conversation

@kjnsn (Collaborator) commented Feb 10, 2025

No description provided.

@backwardspy (Member) left a comment

nice work! just a couple of suggestions to consider.

def parse_config(config: str, verbose: bool) -> list[TmuxCommand]:
    commands: list[TmuxCommand] = []
    # TODO: Handle multiline commands that contain '\'
    for line_number, line in enumerate(config.splitlines()):
@backwardspy (Member) commented:

Suggested change:
-    for line_number, line in enumerate(config.splitlines()):
+    for line_number, line in enumerate(config.splitlines(), start=1):

to start counting lines at 1 rather than 0, unless i've missed a +1 somewhere else!

    # TODO: Handle multiline commands that contain '\'
    for line_number, line in enumerate(config.splitlines()):
        cmd_re = r"\s*(\w+)\s+(-\w+)?\s*(@?[\w\-_]+)?\s*[\"']?(.*)"
        m = re.match(cmd_re, line)
@backwardspy (Member) commented:

you can re.compile this pattern once at the beginning and then match with that, rather than recompiling on each iteration of the loop
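for illustration, a minimal sketch of the hoisted pattern, reusing the regex from the snippet above; the `parse_first_words` helper and the sample config lines are invented here, and the real function's `TmuxCommand` handling is omitted:

```python
import re

# Compile once at module level; the loop then reuses the compiled object.
CMD_RE = re.compile(r"\s*(\w+)\s+(-\w+)?\s*(@?[\w\-_]+)?\s*[\"']?(.*)")

def parse_first_words(config: str) -> list[str]:
    """Return the command name from each config line that matches."""
    names: list[str] = []
    for line_number, line in enumerate(config.splitlines(), start=1):
        m = CMD_RE.match(line)
        if m:
            names.append(m.group(1))
    return names

print(parse_first_words("set -g @thm_bg 'base'\nbind r source-file ~/.tmux.conf"))
```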

@kjnsn (Collaborator, Author) commented:

Why is that a benefit?

@backwardspy (Member) commented Feb 10, 2025:

it might not be. in theory, compiling it once and then using it many times should be kinder on system resources. however, i believe python maintains its own cache for compiled regex patterns anyway, so you may see no difference at all.
i'm also not sure how long the average tmux config is; if this loop only runs a few hundred times it would be a negligible difference.
thought it was worth pointing out for consideration regardless.

edit: i do see some improvement locally:

$ python -m timeit -s 'import re' 're.match("[a-z]", "hello world!")'
1000000 loops, best of 5: 227 nsec per loop
$ python -m timeit -s 'import re; pat = re.compile("[a-z]")' 'pat.match("hello world!")'
5000000 loops, best of 5: 83.3 nsec per loop

@kjnsn (Collaborator, Author) commented:

It is cached, and I'm not really interested in micro-optimisations without a measurable performance benefit that users would notice.
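(For reference, the module-level cache is easy to observe in CPython; the identity check below relies on an implementation detail of `re`, not a documented guarantee:)

```python
import re

# CPython keeps a module-level cache of compiled patterns, so compiling
# the same string pattern twice returns the very same object.
a = re.compile(r"[a-z]+")
b = re.compile(r"[a-z]+")
print(a is b)  # the second call hits re's internal cache
```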

@backwardspy (Member) commented:

as i said, just a suggestion :)

