logging_strict

joined 1 year ago
[–] logging_strict 1 points 3 hours ago* (last edited 3 hours ago)

Learn Sphinx, which can mix .rst and .md files. myst-parser is the package that handles the .md files.

Just up your game a bit and you'll have variables similar to Obsidian tags that don't cause problems when rendered into an html web site or a pdf file
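For the curious, those variables come from myst-parser's "substitution" extension. A minimal Sphinx conf.py sketch; the substitution names and values here are made up for illustration:

```python
# Hypothetical conf.py fragment. Enables myst-parser and its
# substitution extension, then defines reusable variables.
extensions = ["myst_parser"]
myst_enable_extensions = ["substitution"]

# made-up example values; use your own
myst_substitutions = {
    "project_name": "my_package",
    "min_python": "3.10",
}
```

In any .md file, `{{ project_name }}` then renders identically in the html and pdf builds.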

[–] logging_strict 1 points 3 hours ago (1 children)

the OP is discussing one step before pandoc

[–] logging_strict 2 points 1 day ago

The way forward is to write a unittest module (unfortunately the author is not using pytest), with example characters taken from each of the four forms.

THEN go to town testing each of the low level functions.

Suspect the test coverage is awful. The mypy and flake8 results, likewise awful.

[–] logging_strict 1 points 1 day ago

There are several forms:

K1 NoSymbol K2 NoSymbol: characters with lower/upper case forms

K1 K2 K1 K2: unicode < 256 with no lower/upper case forms, like the | or + symbol

K1 K2 K3 NoSymbol: 2-byte latin extended character set

K1 K2 K3 K4: 3 bytes, like the nuke radiation emoji

Non-authoritative guess, from playing around with xev together with the onboard virtual keyboard and my symbols layout.

[–] logging_strict 1 points 1 day ago

keysym indices 0 and 2 are the lower and upper case forms, if the character has upper and lower case equivalents.

This is documented in keysym_group when it should be documented in keysym_normalize:

In that case, the group should be treated as if the first element were
the lowercase form of ``K`` and the second element were the uppercase
form of ``K``.
[–] logging_strict 1 points 2 days ago

So 1-byte characters have both upper and lower case forms:

def keysym_group(ks1, ks2):
    """Generates a group from two *keysyms*.

    The implementation of this function comes from:

        Within each group, if the second element of the group is ``NoSymbol``,
        then the group should be treated as if the second element were the same
        as the first element, except when the first element is an alphabetic
        *KeySym* ``K`` for which both lowercase and uppercase forms are
        defined.

        In that case, the group should be treated as if the first element were
        the lowercase form of ``K`` and the second element were the uppercase
        form of ``K``.

    This function assumes that *alphabetic* means *latin*; this assumption
    appears to be consistent with observations of the return values from
    ``XGetKeyboardMapping``.

    :param ks1: The first *keysym*.

    :param ks2: The second *keysym*.

    :return: a tuple conforming to the description above
    """
[–] logging_strict 1 points 2 days ago (5 children)

Solves the mystery of the repeating entries.

1-, 2-, and 3-byte unicode characters map to corresponding keysyms, which get packed into that tuple. Seems the author likes the number 4.

def keysym_normalize(keysym):
    """Normalises a list of *keysyms*.

    The implementation of this function comes from:

        If the list (ignoring trailing ``NoSymbol`` entries) is a single
        *KeySym* ``K``, then the list is treated as if it were the list
        ``K NoSymbol K NoSymbol``.

        If the list (ignoring trailing ``NoSymbol`` entries) is a pair of
        *KeySyms* ``K1 K2``, then the list is treated as if it were the list
        ``K1 K2 K1 K2``.

        If the list (ignoring trailing ``NoSymbol`` entries) is a triple of
        *KeySyms* ``K1 K2 K3``, then the list is treated as if it were the list
        ``K1 K2 K3 NoSymbol``.

    This function will also group the *keysyms* using :func:`keysym_group`.

    :param keysyms: A list of keysyms.

    :return: the tuple ``(group_1, group_2)`` or ``None``
    """
[–] logging_strict 1 points 2 days ago* (last edited 2 days ago)

eggplant math, cuz it isn't in SYMBOLS

>>> s = '''🍆'''
>>> dec_s = ord(s)
>>> dec_s
127814
>>> hex(dec_s)
'0x1f346'

From the source code

def char_to_keysym(char):
    """Converts a unicode character to a *keysym*.

    :param str char: The unicode character.

    :return: the corresponding *keysym*, or ``0`` if it cannot be found
    """
    ordinal = ord(char)
    if ordinal < 0x100:
        return ordinal
    else:
        return ordinal | 0x01000000

What a nutter! Comparing an int to a hex literal. Though they're the same thing in Python:

>>> int(0x100)
256
>>> int(0x01000000)
16777216

eggplant emoji keysym
>>> 127814 | 16777216
16905030
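The quoted function is easy to check against the eggplant arithmetic above:

```python
def char_to_keysym(char):
    # logic copied from the quoted pynput source
    ordinal = ord(char)
    if ordinal < 0x100:
        return ordinal
    else:
        return ordinal | 0x01000000

# latin-1 passes through untouched; everything else gets the 0x01000000 flag
print(char_to_keysym("|"))   # 124
print(char_to_keysym("🍆"))  # 16905030
```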
[–] logging_strict 1 points 2 days ago

_util/xorg_keysyms.py

Contains mapping of keysym to unicode str

Type this into the terminal; it'll open up a small window. With the window in focus, type:

xev -event keyboard

Type 1.

From xev

keysym 0x31, 1

Corresponding entry in pynput._util.xorg_keysyms.SYMBOLS

'1': (0x0031, u'\u0031'),

So the hex is zero-padded to a minimum of four places: 0x0031 instead of 0x31.

From xev

keysym 0xac9, trademark

Corresponding entry in pynput._util.xorg_keysyms.SYMBOLS

'trademark': (0x0ac9, u'\u2122'),

From xev

Type in the nuke radiation emoji.

keysym 0x1002622, U2622 bytes: (e2 98 a2) "☢"

So three bytes instead of one or two bytes

From xev

(keysym 0x7c, bar)
1 bytes: (7c) "|"

Corresponding entry in pynput._util.xorg_keysyms.SYMBOLS

'bar': (0x007c, u'\u007C'),
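The xev readings line up with the 0x01000000 flag from char_to_keysym: for codepoints beyond the legacy keysym tables, the keysym is just the codepoint OR'd with that flag. A quick check using the values from the xev output above:

```python
# X11's offset flag for unicode keysyms
KEYSYM_UNICODE_FLAG = 0x01000000

# U+2622 RADIOACTIVE SIGN, reported by xev as keysym 0x1002622
nuke_keysym = KEYSYM_UNICODE_FLAG | ord("☢")
print(hex(nuke_keysym))  # 0x1002622

# small keysyms like bar map straight to the codepoint,
# zero-padded to four places in SYMBOLS
print(ord("|") == 0x007C)  # True
```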

[–] logging_strict 2 points 3 days ago (3 children)

Took two days to think about your original post. Was thinking, hmmm, this package and the trouble you are having are both fresh and interesting.

Remote controlling both the mouse and keyboard seems worth spending time trying out.

[–] logging_strict 2 points 1 week ago (1 children)

Very wise idea. And if you want to up your game, you can validate the yaml against a schema.

Check out strictyaml

The author is ahead of his time. Uses validated yaml to build stories and weave those into web sites.

Unfortunately the author also does the same with the strictyaml tests. Can get frustrating because the tests are too simple.

[–] logging_strict 2 points 1 week ago* (last edited 1 week ago)

Curious to hear your reasoning as to why yaml is less desirable? Would think the opposite.

Surprised me with your strong opinion.

Maybe if you would allow, and have a few shot glasses handy, could take a stab at changing your mind.

But first list all your reservations concerning yaml

Relevant packages I wrote that rely on yaml:

  • pytest-logging-strict

  • sphinx-external-toc-strict

4
submitted 1 month ago* (last edited 1 month ago) by logging_strict to c/python
 

Market research

This post is only about dependency management, not package management, not build backends.

You know about these:

  • uv

  • poetry

  • pipenv

You are probably not familiar with:

  • pip-compile-multi

    (toposort, pip-tools)

You are definitely unfamiliar with:

  • wreck

    (pip-tools, pip-requirements-parser)

pip-compile-multi creates lock files. It has no concept of unlock files.

wreck produces both lock and unlock files, and is venv aware.

Both sync dependencies across requirement files

Both act only upon requirements files, not venv(s)

Up to speed with wreck

You are familiar with .in and .txt requirements files.

.txt is split out into .lock and .unlock. The latter is for packages which are not apps.

Create .in files that are interlinked with -r and -c. No editable builds. No urls.

(If this is a deal breaker feel free to submit a PR)

pins files

pins-*.in files are for common constraints. The huge advantage here is documenting why.

Without the documentation, even the devs have no idea whether or not a constraint is still required.

Each pins-*.in file is split out to tackle one issue. The beauty is the issue must be documented with enough detail to bring yourself back up to speed.

Explain the origin of the issue in terms a 6 year old can understand.
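A hypothetical pins file following that advice (the package, version, and reason are made up for illustration):

```
# pins-myst-parser.in  (hypothetical example)
#
# Why: docs build breaks with myst-parser 3.x because the theme still
# expects 2.x substitution behavior. Remove this pin once the theme
# upstream ships a fix.
myst-parser<3
```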

Configuration

python -m pip install wreck

This is logging-strict's pyproject.toml


[tool.wreck]
create_pins_unlock = false

[[tool.wreck.venvs]]
venv_base_path = '.venv'
reqs = [
    'requirements/dev',
    'requirements/kit',
    'requirements/pip',
    'requirements/pip-tools',
    'requirements/prod',
    'requirements/manage',
    'requirements/mypy',
    'requirements/tox',
]

[[tool.wreck.venvs]]
venv_base_path = '.doc/.venv'
reqs = [
    'docs/requirements',
]

dynamic = [
    "optional-dependencies",
    "dependencies",
    "version",
]

[tool.setuptools.dynamic]
dependencies = { file = ["requirements/prod.unlock"] }
optional-dependencies.pip = { file = ["requirements/pip.lock"] }
optional-dependencies.pip_tools = { file = ["requirements/pip-tools.lock"] }
optional-dependencies.dev = { file = ["requirements/dev.lock"] }
optional-dependencies.manage = { file = ["requirements/manage.lock"] }
optional-dependencies.docs = { file = ["docs/requirements.lock"] }
version = {attr = "logging_strict._version.__version__"}

Look how short and simple that is.

The only thing you have to unlearn is being so timid.

More venvs. More constraints and requirements complexity.

Do it

mkdir -p .venv || :;
pyenv version > .venv/python-version
python -m venv .venv

mkdir -p .doc || :;
echo "3.10.14" > .doc/python-version
cd .doc && python -m venv .venv; cd - &>/dev/null

. .venv/bin/activate
# python -m pip install wreck
reqs fix --venv-relpath='.venv'

There will be no avoidable resolution conflicts.

Preferable to do this within tox-reqs.ini

Details

The TOML file format expects paths to be single quoted. The paths are relative and omit the final file suffix.

If pyproject.toml is not in the cwd, pass --path='path to pyproject.toml'

create_pins_unlock = false tells wreck not to produce .unlock files for pins-*.in files.

DANGER

This is not for the faint of heart. Avoid it if you can. This is for the folks who often say, Oh really? Hold my beer!

For pins that span venvs, add the file suffix .shared

e.g. pins-typing.shared.in

wreck deals with one venv at a time. Files that span venvs have to be dealt with manually and carefully.

Issues

  1. no support for editable builds

  2. no url support

  3. no hashes

  4. your eyes will tire and your brain will splatter on the wall from all the eye rolling, after sifting thru endless posts on uv and poetry and none about pip-compile-multi or wreck

  5. Some folks love having all dependencies managed within pyproject.toml. These folks are deranged and it's impossible to convince them otherwise. pyproject.toml is a config file, not a database. It should be read only.

  6. a docs link on pypi.org is a 404. Luckily there are two docs links. Should really just fix that, but it's left like that to see if anyone notices. No one has.

4
submitted 1 month ago* (last edited 1 month ago) by logging_strict to c/python
 

Finally got around to creating a gh profile page

The design is to give activity insights on:

  • what Issues/PRs working on

  • future issues/PRs

  • for fun, show off package mascots

All out of ideas. Any suggestions? How did you improve your github profile?

14
submitted 2 months ago* (last edited 2 months ago) by logging_strict to c/python
 

From helping other projects, I have run across a fundamental issue for which web searches have not given appropriate answers.

What should go in a tarball and what should not?

Is it only the build files, python code, and package data and nothing else?

Should it include tests/ folder?

Should it include development and configuration files?

I have seven published packages which include almost all the files and folders, including:

  • .gitignore

  • .gitattributes

  • .github folder tree

  • docs/

  • tests/

  • Makefile

  • all config files

  • all tox files

  • pre-commit config file

My thinking is that the tarball should have everything needed to maintain the package, but this belief has been challenged: the claim is that the tarball is not appropriate for that.

Thoughts?

 

PEP 735: what is its goal? Does it solve our dependency hell issue?

A deep dive, and out comes this limitation:

The mutual compatibility of Dependency Groups is not guaranteed.

-- https://peps.python.org/pep-0735/#lockfile-generation

Huh?! Why not?

mutual compatibility or go pound sand!

pip install -r requirements/dev.lock
pip install -r requirements/kit.lock -r requirements/manage.lock

The above code, purposefully, does not afford pip a fighting chance. If there are incompatibilities, they'll come out when trying randomized combinations.

Without a means to test for and guarantee mutual compatibility, end users will always find themselves in dependency hell.

Any combination of requirement files (or dependency groups), intended for the same venv, MUST always work!

What if this is scaled further, instead of one package, a chain of packages?!
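Rather than randomized combinations, the pairs can be enumerated exhaustively. A sketch; the lock file names are made up, and the actual pip invocation is left as a comment since it needs a throwaway venv:

```python
from itertools import combinations

# hypothetical lock files that target the same venv
lock_files = [
    "requirements/dev.lock",
    "requirements/kit.lock",
    "requirements/manage.lock",
]

# every pair must co-install cleanly; bump 2 up to 3+ for deeper checks
pairs = list(combinations(lock_files, 2))
for a, b in pairs:
    # subprocess.run(["pip", "install", "--dry-run", "-r", a, "-r", b], check=True)
    print(a, b)
```

With n requirement files that is n choose 2 pip runs, which is cheap compared to end users discovering the conflicts for you.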

11
submitted 4 months ago* (last edited 4 months ago) by logging_strict to c/python
 

In a requirements-*.in file, at the top of the file, are lines with -c and -r flags, each followed by a path to another requirements-*.in file. These are relative paths (ignoring URLs).

Say we have docs/requirements-pip-tools.in

-r ../requirements/requirements-prod.in
-c ../requirements/requirements-pins-base.in
-c ../requirements/requirements-pins-cffi.in

...

The intent is that compiling this would produce docs/requirements-pip-tools.txt

But there is confusion as to which flag to use. It's non-obvious.

constraint

Subset of the requirements file features. Intended to restrict package versions. A constraint does not install the package by itself!

Does not support:

  • editable mode (-e)

  • extras (e.g. coverage[toml])

Personal preference

  • always organize requirements files in folder(s)

  • don't prefix requirements files with requirements-; I'm just doing it here

  • DRY principle applies; split out constraints which are shared.
