Add 'infra/website/' from commit '275bec94f9f9e221bfddeb06ca7d5b87068eb7a0'

git-subtree-dir: infra/website
git-subtree-mainline: c4625b175f8200f643fd6e11010932ea44c78433
git-subtree-split: 275bec94f9
Branch: wip/yesman
Author: Katharina Fey, 4 years ago
Parent: 221c787583
Commit: 6446d904a6
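The `git-subtree-dir`, `git-subtree-mainline`, and `git-subtree-split` trailers above are what `git subtree add` records when it grafts an external repository's history into a subdirectory of the mainline. A minimal sketch of the command that produces such a merge commit; the repository URL and branch name below are hypothetical placeholders, not taken from this commit:

```shell
# Graft the standalone website repository into infra/website/.
# URL and branch are illustrative placeholders, not the real source.
git subtree add --prefix=infra/website \
    https://example.org/website.git master
```

`git subtree add` creates a merge commit whose second parent is the tip of the imported history; that tip is what the `git-subtree-split` trailer records (here 275bec94f9).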
Changed files (count of added lines; BIN = binary):

  1. infra/website/.gitignore (+8)
  2. infra/website/.vscode/settings.json (+3)
  3. infra/website/LICENSE (+674)
  4. infra/website/Makefile (+124)
  5. infra/website/README.md (+68)
  6. infra/website/blog_update.sh (+11)
  7. infra/website/content/.well-known/openpgpkey/hu/nzn5f4t6k15893omwk19pgzfztowwkhs (BIN)
  8. infra/website/content/.well-known/openpgpkey/policy (+0)
  9. infra/website/content/555F2E4B6F87F91A4110.txt (+167)
  10. infra/website/content/blog/000_jolly_christmas.md (+33)
  11. infra/website/content/blog/001_jolly_christmas_update.md (+38)
  12. infra/website/content/blog/002_open_plantbot.md (+49)
  13. infra/website/content/blog/050_gsoc_1.md (+25)
  14. infra/website/content/blog/051_gsoc_2.md (+37)
  15. infra/website/content/blog/052_gsoc_3.md (+69)
  16. infra/website/content/blog/094_post_camp2015.md (+20)
  17. infra/website/content/blog/095_recovering_luks.md (+59)
  18. infra/website/content/blog/096_another_blog_update.md (+27)
  19. infra/website/content/blog/097_post_33c3.md (+71)
  20. infra/website/content/blog/098_super_ui.md (+114)
  21. infra/website/content/blog/099_moonscript.md (+60)
  22. infra/website/content/blog/100_rebooting.md (+14)
  23. infra/website/content/blog/101_rust_is_awesome.md (+57)
  24. infra/website/content/blog/102_home_manager.md (+115)
  25. infra/website/content/blog/103_rust_2019.md (+152)
  26. infra/website/content/blog/104_35c3.md (+107)
  27. infra/website/content/blog/105_allocation.md (+168)
  28. infra/website/content/blog/106_encrypted_zfs.md (+173)
  29. infra/website/content/blog/107_usable_gpg.md (+92)
  30. infra/website/content/blog/108_public_inbox.md (+42)
  31. infra/website/content/blog/109_nix_ocitools.md (+114)
  32. infra/website/content/blog/110_labels.md (+85)
  33. infra/website/content/blog/111_rust_2020.md (+183)
  34. infra/website/content/blog/112_p1_primitivism.md (+111)
  35. infra/website/content/blog/113_another_decade.md (+44)
  36. infra/website/content/blog/114_design_update.md (+42)
  37. infra/website/content/blog/115_git_mail.md (+362)
  38. infra/website/content/blog/116_1_pandemic_politics.md (+43)
  39. infra/website/content/blog/116_how_to_run_your_community.md (+55)
  40. infra/website/content/blog/117_on_gender.md (+163)
  41. infra/website/content/blog/118_the_good_place.md (+232)
  42. infra/website/content/blog/xxx_autonomous_tech.md (+12)
  43. infra/website/content/blog/xxx_issue_trackers.md (+118)
  44. infra/website/content/blog/xxx_no_google.md (+25)
  45. infra/website/content/blog/xxx_sieve.md (+19)
  46. infra/website/content/downloads/antifa-or-gtfo.svg (+122)
  47. infra/website/content/downloads/cuckoo_hashing.pdf (BIN)
  48. infra/website/content/downloads/politische-aktion.svg (+213)
  49. infra/website/content/images/banner_bg2x.png (BIN)
  50. infra/website/content/images/banners/plantb0t_revA.png (BIN)
  51. infra/website/content/images/cf_disk.png (BIN)
  52. infra/website/content/images/cf_disk1.png (BIN)
  53. infra/website/content/images/christmas_bauble_pcb.jpg (BIN)
  54. infra/website/content/images/christmas_bauble_pcb.png (BIN)
  55. infra/website/content/images/christmas_bauble_render.png (BIN)
  56. infra/website/content/images/example.png (BIN)
  57. infra/website/content/images/favicon.ico (BIN)
  58. infra/website/content/images/flora_pinout.png (BIN)
  59. infra/website/content/images/flora_withleds.jpg (BIN)
  60. infra/website/content/images/front_matrix_background.png (BIN)
  61. infra/website/content/images/gameofcodes/series01/01_setup_ui.png (BIN)
  62. infra/website/content/images/gameofcodes/series01/02_setup_ui.png (BIN)
  63. infra/website/content/images/gameofcodes/series01/04_eclipse.png (BIN)
  64. infra/website/content/images/gameofcodes/series01/05_eclipse.png (BIN)
  65. infra/website/content/images/gameofcodes/series01/06_eclipse.png (BIN)
  66. infra/website/content/images/gameofcodes/series01/07_gamechange.png (BIN)
  67. infra/website/content/images/gameofcodes/series02/01_framelife.png (BIN)
  68. infra/website/content/images/gameofcodes/series02/02_createclass.png (BIN)
  69. infra/website/content/images/gameofcodes/series03/01_badrotation.gif (BIN)
  70. infra/website/content/images/gameofcodes/series03/02_rotating.gif (BIN)
  71. infra/website/content/images/gameofcodes/series04/01_800x600.png (BIN)
  72. infra/website/content/images/gameofcodes/series04/02_720p.png (BIN)
  73. infra/website/content/images/gameofcodes/series04/03_1080p.png (BIN)
  74. infra/website/content/images/gameofcodes/series04/04_1440p.png (BIN)
  75. infra/website/content/images/gsoc/00_acceptance.png (BIN)
  76. infra/website/content/images/gsoc/01_debugger.png (BIN)
  77. infra/website/content/images/gsoc/02_cryptoui.png (BIN)
  78. infra/website/content/images/jabber/pidgin1.png (BIN)
  79. infra/website/content/images/jabber/pidgin2.png (BIN)
  80. infra/website/content/images/jabber/pidgin3.png (BIN)
  81. infra/website/content/images/jabber/pidgin4.png (BIN)
  82. infra/website/content/images/jabber/pidgin5.png (BIN)
  83. infra/website/content/images/jabber/pidgin6.png (BIN)
  84. infra/website/content/images/libgdx_ui/01_base_problem.png (BIN)
  85. infra/website/content/images/libgdx_ui/02_ui_structure.png (BIN)
  86. infra/website/content/images/logo3.png (BIN)
  87. infra/website/content/images/lua_moon_banner.jpg (BIN)
  88. infra/website/content/images/lua_moon_banner.png (BIN)
  89. infra/website/content/images/omnitool_background.jpg (BIN)
  90. infra/website/content/images/omnitool_background2.jpg (BIN)
  91. infra/website/content/images/plantb0t_RevA_front.png (BIN)
  92. infra/website/content/images/plantb0t_RevA_naked.png (BIN)
  93. infra/website/content/images/rad1o_badge.png (BIN)
  94. infra/website/content/images/rdb_article_banner.png (BIN)
  95. infra/website/content/images/reedb_banner.png (BIN)
  96. infra/website/content/images/silly_no_visitors_blog.png (+0)
  97. infra/website/content/images/ws_2812b_single.png (BIN)
  98. infra/website/content/keys.txt (+1)
  99. infra/website/content/pages/keys.md (+9)
  100. infra/website/content/pages/legal.md (+22)

Some files were not shown because too many files have changed in this diff.

infra/website/.gitignore
@@ -0,0 +1,8 @@
# Python and pelican directories
env/
output/
# Weird files
**/*.pyc
*.pid
.directory

infra/website/.vscode/settings.json
@@ -0,0 +1,3 @@
{
    "python.pythonPath": "${workspaceFolder}/env/bin/python"
}

infra/website/LICENSE
@@ -0,0 +1,674 @@
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<http://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<http://www.gnu.org/philosophy/why-not-lgpl.html>.

@ -0,0 +1,124 @@
PY?=python3
PELICAN?=pelican
PELICANOPTS=
BASEDIR=$(CURDIR)
INPUTDIR=$(BASEDIR)/content
OUTPUTDIR=$(BASEDIR)/output
CONFFILE=$(BASEDIR)/pelicanconf.py
PUBLISHCONF=$(BASEDIR)/publishconf.py
FTP_HOST=localhost
FTP_USER=anonymous
FTP_TARGET_DIR=/
SSH_HOST=localhost
SSH_PORT=22
SSH_USER=root
SSH_TARGET_DIR=/var/www
S3_BUCKET=my_s3_bucket
CLOUDFILES_USERNAME=my_rackspace_username
CLOUDFILES_API_KEY=my_rackspace_api_key
CLOUDFILES_CONTAINER=my_cloudfiles_container
DROPBOX_DIR=~/Dropbox/Public/
GITHUB_PAGES_BRANCH=gh-pages
DEBUG ?= 0
ifeq ($(DEBUG), 1)
PELICANOPTS += -D
endif
RELATIVE ?= 0
ifeq ($(RELATIVE), 1)
PELICANOPTS += --relative-urls
endif
help:
	@echo 'Makefile for a pelican Web site'
	@echo ''
	@echo 'Usage:'
	@echo '   make html                          (re)generate the web site'
	@echo '   make clean                         remove the generated files'
	@echo '   make regenerate                    regenerate files upon modification'
	@echo '   make publish                       generate using production settings'
	@echo '   make serve [PORT=8000]             serve site at http://localhost:8000'
	@echo '   make serve-global [SERVER=0.0.0.0] serve (as root) to $(SERVER):80'
	@echo '   make devserver [PORT=8000]         start/restart develop_server.sh'
	@echo '   make stopserver                    stop local server'
	@echo '   make ssh_upload                    upload the web site via SSH'
	@echo '   make rsync_upload                  upload the web site via rsync+ssh'
	@echo '   make dropbox_upload                upload the web site via Dropbox'
	@echo '   make ftp_upload                    upload the web site via FTP'
	@echo '   make s3_upload                     upload the web site via S3'
	@echo '   make cf_upload                     upload the web site via Cloud Files'
	@echo '   make github                        upload the web site via gh-pages'
	@echo ''
	@echo 'Set the DEBUG variable to 1 to enable debugging, e.g. make DEBUG=1 html'
	@echo 'Set the RELATIVE variable to 1 to enable relative urls'
	@echo ''

html:
	$(PELICAN) $(INPUTDIR) -o $(OUTPUTDIR) -s $(CONFFILE) $(PELICANOPTS)

clean:
	[ ! -d $(OUTPUTDIR) ] || rm -rf $(OUTPUTDIR)

regenerate:
	$(PELICAN) -r $(INPUTDIR) -o $(OUTPUTDIR) -s $(CONFFILE) $(PELICANOPTS)

serve:
ifdef PORT
	cd $(OUTPUTDIR) && $(PY) -m pelican.server $(PORT)
else
	cd $(OUTPUTDIR) && $(PY) -m pelican.server
endif

serve-global:
ifdef SERVER
	cd $(OUTPUTDIR) && $(PY) -m pelican.server 80 $(SERVER)
else
	cd $(OUTPUTDIR) && $(PY) -m pelican.server 80 0.0.0.0
endif

devserver:
ifdef PORT
	$(BASEDIR)/develop_server.sh restart $(PORT) > /dev/null
else
	$(BASEDIR)/develop_server.sh restart > /dev/null
endif

stopserver:
	$(BASEDIR)/develop_server.sh stop
	@echo 'Stopped Pelican and SimpleHTTPServer processes running in background.'

publish:
	$(PELICAN) $(INPUTDIR) -o $(OUTPUTDIR) -s $(PUBLISHCONF) $(PELICANOPTS)

ssh_upload: publish
	scp -P $(SSH_PORT) -r $(OUTPUTDIR)/* $(SSH_USER)@$(SSH_HOST):$(SSH_TARGET_DIR)

rsync_upload: publish
	rsync -e "ssh -p $(SSH_PORT)" -P -rvzc --delete $(OUTPUTDIR)/ $(SSH_USER)@$(SSH_HOST):$(SSH_TARGET_DIR) --cvs-exclude

dropbox_upload: publish
	cp -r $(OUTPUTDIR)/* $(DROPBOX_DIR)

ftp_upload: publish
	lftp ftp://$(FTP_USER)@$(FTP_HOST) -e "mirror -R $(OUTPUTDIR) $(FTP_TARGET_DIR) ; quit"

s3_upload: publish
	s3cmd sync $(OUTPUTDIR)/ s3://$(S3_BUCKET) --acl-public --delete-removed --guess-mime-type

cf_upload: publish
	cd $(OUTPUTDIR) && swift -v -A https://auth.api.rackspacecloud.com/v1.0 -U $(CLOUDFILES_USERNAME) -K $(CLOUDFILES_API_KEY) upload -c $(CLOUDFILES_CONTAINER) .

github: publish
	ghp-import -m "Generate Pelican site" -b $(GITHUB_PAGES_BRANCH) $(OUTPUTDIR)
	git push origin $(GITHUB_PAGES_BRANCH)

.PHONY: html help clean regenerate serve serve-global devserver publish ssh_upload rsync_upload dropbox_upload ftp_upload s3_upload cf_upload github

@ -0,0 +1,68 @@
# fun memory violations
This is my website, running at https://spacekookie.de. It's built
with Pelican and uses my own theme, called `crumbs` (because
kookies...).
The theme itself is pretty simple, only implementing the bits that I
need, and using some components to deduplicate template code.
There's a ["permadraft"] folder of articles that never quite made it.
Some of them are fairly fleshed out, but either the time to publish
them passed or I otherwise thought it'd be a bad idea to put them on
the blog.
Their HTML pages are still being built and published, but not included
in any index page (like `blog`). If you can find one, feel free to
hot-link to it.
## How to build
The easiest way to build the website is with [nix]. Simply run
`nix-shell` in this directory to install the required dependencies.
Then you can use `make` to access a whole bunch of website commands,
such as `html` or `devserver`. The dev server is hosted on port
8000.
**Manual install**
If you don't use nix, you need to install `python3` and `pip`. The
python dependencies are `pelican`, `markdown` and `webassets`. Please
for the love of god use a `virtualenv` 😬.
```bash
pip install pelican markdown webassets
pelican content
make devserver
```
## How to contribute
This repository has recently moved from GitHub to [sourcehut]. While
I will still (infrequently) mirror the repository to GitHub, I don't
want to accept contributions there anymore.
I have a [meta issue tracker][tracker], where you can post issues
about any of my projects, [in theory, without requiring
registration][bug]. Alternatively, you can send me a patch via e-mail
either to my personal address, or to my [public-inbox].
["permadraft"]: /~spacekookie/website/tree/master/content/permadraft
[nix]: https://nixos.org/nix
[sourcehut]: https://git.sr.ht/~spacekookie/website
[tracker]: https://todo.sr.ht/~spacekookie/meta
[bug]: https://todo.sr.ht/~sircmpwn/todo.sr.ht/103
[public-inbox]: https://lists.sr.ht/~spacekookie/public-inbox
## License
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or (at
your option) any later version.
This program is distributed in the hope that it will be useful, but
WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
General Public License for more details.

@ -0,0 +1,11 @@
#!/usr/bin/env bash

# Get to the right location
sudo lxc-attach -n spacekookie
cd /var/www/website
# Update the data
git pull
# Re-generate
rm -rf output/
. env/bin/activate
pelican content/

@ -0,0 +1,167 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQINBFsOuoQBEADXJ7bgMzIwmlvm8cJJqwN5yogK+mFA7k29uyqbcIuWLqx5sVFp
k/eB1kS9S+TbNmiIhIkHoDJWkPfpwtt/BDXtkFmYEB8uXtw7nqUFbxNwdA5koVAg
v/5OodD25HemDnnbZzl39ZQMXnwnGmWG7RZZEGJWPhj88gLozH9Qw/zrGG0cAv51
9J/lOJQxxk0dzdAzK/ZquiURtgtVL6EfVhY/Ah1c/0HNMaKHtJeoaHwH0A7Qi27c
udLyaBnZxIsvNHvrn0HGEVqlkyLap+luaPRba8CIE4Q/U0XcZanSps9BDVFbSHak
rMYkfBy5R4b1KdFuGaLZ5d2MPNJbDZdlm4gP+LA8rHDpX8oE/rYEftwObW2IEg8T
I9fhHurgRr0yQGV83ZkYVWJyYch4LIOCg252O+Aip3UGx4Qgit5MO/jmZVMSaL5F
yH8hx0mmoUSeF4GgE+ZRPgGi4HSWfQrgYZDgHPB4QuGtOShkEr+Ebb/JbRheD35g
4bnw2WKrpZDuCg9bg6ZTH20ZhFIqEscr9FeLMgLjbnwRnYTRWlwrexCPJJCi/9Vm
DxU3KiWH2SKRobRQQET6s98UcLQVmkVrWCBZORoTjjLIEMookdE/3uHABd7DGpAj
85Nv8y4SwiOnHVXba3Uz/dJ1KZD4ME4xciHb8XsjKbgPPcjD6VpW96A4tQARAQAB
tCVLYXRoYXJpbmEgRmV5IDxrb29raWVAc3BhY2Vrb29raWUuZGU+iQJOBBMBCAA4
FiEEVV8uS2+H+RpBEGaekHNKnmGcimwFAlsOuoQCGwMFCwkIBwIGFQoJCAsCBBYC
AwECHgECF4AACgkQkHNKnmGcimxfOA/9Er0XA2fNJ2Jhis3GiPi9KFjLSgxgIHjg
j7ekgARyPwlIcKEcAgWpRrLcEUOD+FaE//zNGcMjDdSUcuo65NSPT5WiBCqu6ao0
D6LIPzax4MBo2WsSELhgBoFFB07bH9BImsp9jrhrhARGk4E/MPAAy7dqh3/6/Sln
ocOrSYDRVONfCO/NqwITQ7ExE6VdRmhEw+dFjCJR5GaLhzK6dXDq2l7VfKQE5Qwo
gfWWnPiDFNIlSTpnNSsbuQGzzQ0iQTlD2Ce09lGpZETvSNKE5MntuCvDAxHP1LCP
RtcIIv/AEfAn8LDUEewKsrYasQ4LeDt+8D+2EQxNU+TB3KuFcxqjZHiYy5zKj4KZ
6NP2xFAVjRShPv/R5kngrcB/2ioGlaaKQNSblpMr6UCjrEMh1vkboG8nVwqO/yGQ
sfbLv6Jyqs1/jGQAz9k132BMmzIfcjc9mVULzZy9Fd/hCQiojgEUCMQDPc1hdU0O
fgWuZKtaabbadGyugoFUiLHlLzpYKVMRw3uXiEM+QT713z7C9BA+/5oPu/6g6tLu
JcqY7hjLU49idF2RUeTTUHnSVADOr5o0Vkwr7aAOiwPzz53MDyrBLSnmWzE12C0M
q72HOCRe0x2IJe7EnShlAJvk1FYhgXlN7a+jGNZYQ5Du0cidSYAAndBu2/ILw3FY
dLuwbdGF2Va5Ag0EWxEoWAEQAO+7WLTQQiCVh0d5cz1vWACO3Beas3MFvdyCw7my
xUOD+pZD0yfwBP4BLV16HB0IueAlBVhV9FygRpwI7zK4sgD6huotNcguun0NGqvG
Hh3fmvvqruE+W3/EMtXwLCxaOesDVgpzrfan9B57f32WZaXhGx6orw7qeol0Rgt9
Zphu5WBUCMG4bI9tUC5zj7SM7N8GBRIgXnwr2bf/Uc7bqmspD0Sikyjr0dB3vJjF
cMDmWoHZ/ho2gNBiz2a5KPt7qTgrjP40goE/4g+p4ap5g1Mf9KnQeR7E3J4iD4HZ
/J1ezucFW1YCsDWGtdkAtwZxqIj821FsLbMVl+7G2g2HupvZrGXYLmrrbsnxfXGp
h0R6/RSnR6yFj0J0pgjKcFcnUPvj2EGCqKnkO+gvKd2o7NWF6kQBKNvnHTJoP33s
9uhi7c5A1nDyfcry0NScbb/8gyBBf9DuweSHiD4oApzyxjBnxpY72Gw9FR40kJzE
rJwZkFTH5wD3rRQbwFjjtADRZ9GpPOmaYS2OcT+28M/nH1F7J1yFL6T5zcPReeK1
oR97CMRJLQOUTM/XpfQFvt/wVSFgLYsuuMArip0Zv//gpMmFQSPIQdDHsMSD6WdD
7aXlEen7I4YnQnQ4qwEYN68rPU/Ajms/AutmicBR7KwoGnVF53hy17MiOqcXnQRQ
iRG1ABEBAAGJAjYEKAEIACAWIQRVXy5Lb4f5GkEQZp6Qc0qeYZyKbAUCXQurawId
AQAKCRCQc0qeYZyKbDc4D/9xru5u61yWlmiYobLZ/vsTvNLSVGM73QZLZ74/EM1H
X4J7ZXAGuZe7cJZ8i78QFa07E5DhOGjFPy3UqT3dr2DLhVfLlhXHmGAntOWAedTM
Hg3t32AKsG+cvM1XdMq3quaYU1lsI5xZ73VMyRbL/f6hpSBT7Ib9FZtbEPwoUh7Y
7vAYPvyLgvZy5N+O4GH2pAXrpJZXVEq7DvvAcgjOr3ndq3vKSQZGiy2b0Jc4SaVA
HILJaetii/jEU52RRk/a+rqFK5jxQ+BrBUdbMGNPgvZSFoVvoJ+gb/vEbvtKelkM
lP7kU/gbStGVJDoGkHtuJyfzfjVsSTjkMMyhq1X6t+DrWkVdVNdrekcOKU3n121y
zbodFDrqiK2hVVS9jsg97ocPfXVSSmvbkvI4/O1EqM4QGgkKyU/iFZrZRnwE6m3Y
JnK7JvrIYFlmEjD3OAmggVqwy3pgvmSGqLgpjAp1HuX+FBu8SjHz9LrXt/R+IaKD
U/tAgl87TF8v47obduNwH82qgivkeiutsj1zWpoG8u7J0uUatgKk++yDFDipaRv3
1Bu+xUOwul1cwH+lNqpL0MAQcbDxl4DER68a8+Ue4SSHYev81wHOWX0h1+3hFt6R
ZgzsoScfLKigbpgYRTNoBpVLoHa1r4C3pO+AGJUPskbjfWJmF+3Q2KyuQML6ePF5
7IkCNgQYAQgAIBYhBFVfLktvh/kaQRBmnpBzSp5hnIpsBQJbEShYAhsMAAoJEJBz
Sp5hnIpsEBAP/ibdX4vrMOrQ2KqY/w5EBXV9Jeqe6rn3thUhnXi+ueJ3JPSykPv4
3p54u+vk7dt8EC1GlxuWSsokujyBCnSpl+Dc2D19nb7Z7XXTV8bApgR3KRM/l1qZ
f5jGDi5bk4+oQ76sfZYAOkCCdrAApEDWROUpu3soRYvrwkP57KqrFFrAH5TFgIRD
fLy1+LOxZkpnf6a6UMaXS3hO2dG2WVLEZnRhrMU/PRXBTihZHRsr49rgS8zjaDaA
RqlOQNFiBJ7Oe3ylu3qPrPmbTZFxA3fTBJzit4mmiluVJhlBRhBkGzKmxLiHSk/h
wPu6tdX8o0ndD9mTZliU5TNxhbTCiJk3AMGstsMUoe2X3XC3+weIzjuqf/vx2t3i
+mq6JUD4kJkJLt5amGQO54WlTOt4UYaqHvwlW4siJ6dJNpZUnuUgx+DnTkzg4OIJ
vDmUhrfAzdN/uVEI0OkzikJjMwJ3010sQsFFrIDSrGNf+aMDHMQJ8K6L5LHsVZqc
yZ3iym0z2HT2XhWJjRhRgWr3Mh6MMfA1cQo2vd8eJs5XXtmA6E4hwMp2PqdO9vgy
vZSBniPtot3X4sAqL+5bEf+9bf391SYCHveGAu/5mccNoOQMovITPaR8fTY1qNWl
FQj6AfiCzwnghQQTfjE8no3DU2+gUT3OBStiYUHxkdWTkFZf4uPvOl+juQINBFsR
KHABEADPLhGCfEcrv1Ca8MdL+iQKqqgtbZHx7fOMsz8iIXr7m5GastV6G9YOXdCI
AwrLllYbD9gC6PNGtfOSb5YbBFKiazmoMueNIs9w3plET132CeaNblIQeAa68S3s
2X5g1MWzL5qMPvvLZW1JY6JOnlU8JcLyk5STVV/2ILiaxePLsyyA4Gp6Elc3V/k0
paXGSYAg8lpea2J2bGFA1Hy9Y0tIybV/ZwBn8UnzqVOz0AQC050X+4MZA5ATzsuW
cFlFHeAjncd5hxkAALnv8hAB5MlPxUTCx0owvdXBQ2Hb3FUWRVbvSJCH+DCE2BAB
kNz0aqIYaRor0KY9yKBCiTorWw0+NTnB1WZhF7G3zEy2wJORtE+cc8OKjda/jl9L
HIYp5R8KH6FT3qrWNr00HPpaPzmD17IzafmkU8afCSK4O+dMJ4v/yvMTYMllMb16
y2/xSE984K2jEM/TIa1AZ79kqQNFEsvvrgNnN24Vrk545h4Lky8vC1IgWU9ydrXc
vnGZLsahomefBH2gzNzZ9MIwQSdDNTTM9T7Ba3PcbcjJMCcNx6w4iglMPU2S58EG
zl4R6/zFM/Jtbxudch0Q2R3tFhyQGgNbuvSFbrBHziw1ET8ZRH3oR/eJXcrPbMk/
XkWcY9cDiPJz6LImpjbH2XLR5v7eCLquAljLIvQJ9b9eTWPX5wARAQABiQI2BBgB
CAAgFiEEVV8uS2+H+RpBEGaekHNKnmGcimwFAlsRKHACGyAACgkQkHNKnmGcimzN
IxAAr9gXHSe3OE4ph8fhAZzqC04blNfnIp/x349MTF/O6Dghav+74Dsn/wiS1GeC
9HjpwfhMODoj1LzORBmh9iE0bmhMkoVi4dXi6xI1+IIm14Or90ceYYaELthjgc71
c6/12Ysjo21AUz3zMyi+IGaTv0Hlm4r8i29YWaaahQtHQgQ3R5l7xbktIu6yecWj
VGjLGuWHvQPC0XXgPh7d6Sl7JiPCI6Nc6uC3nXzdH7BjYiN2bwjiQykFmQ3kAfP+
cmAwN5s43P1SWW1JcPx+gf6+/eZFyezZ7KKQYB0ga+N7bgStxIjnr0hAc1gc9inw
nEVSx/mkKq03PdKKyinu/TEOGxPUo35Xnrmuyhj17wTDfaX7GxHqj8ajJSfSjEmI
HPQyWoAIC9aHro1D1rWAuzZyi7X7ff/aUry4xgneWYsIdZZ12V9/7icpRuqzMGNX
yBzoCeFCGHjmjcTadaAGZg8N5c1d9t4v1jNPE6T3+nJm2rLtz2jLfgU1ERcR+m1P
KZBlfI5b0ytkJBzaeQh1o7w6til85a1ONUeM2Dp2niItKbi58r1bm7yUNM3+oJGO
Kx+kK0SJLYp0Co8dtTpU4xAjbKNUVdUAR5OD8LXVEgurJrgyG9RT1dIVPn6Wv2cb
x60HnXJUEeVzqw+MHwdkuDaS+w5UgezjYZRU0m523WQqBY+5Ag0EXQqNbAEQAMKp
91oXeBAUXyVMTaMG7bMQW8iJWp9C2/ThNCVIlmHmxVLRv+snMFdGRPOYVEh680Zv
aYL1aWgMmEjw+D7FbQtlpc3efGMaX2jkDXdZZLH3fSGno304boTSfyGbV9TdP1Bi
C0nG/1vGvAomQ7qDjUO9nQhBFug5s2Qdrn5HDPXWkp0Mkw/i76tsTDrEUfMkcCjn
wUF5+2sjM5ael6QI3b/rCa4/D3Af2IcKqz89egTnVo6uZw5vIfljtS1+q4QoFa5l
nVu6A4irUzwJqxpMudo9MdAxZXLquhysw1SBOWdBu5Hhz9+AHOwy4Nk37BU+6J/+
PFdXjwDGyrwDCh1Kfe9VCRBh4vUYXSn3HDb3mPI9vQlxY4goGGKjypogwxIY+fWA
vG79Xh1qsrm8F6Ts8z+wB4YLQzXfF8/dmmh3t3iSHYNtHMrfLHrV5hhqk2otNOEC
q5UiaSu7to4ptkRenynZMSpKXSQy6t2c/n/7xjQHD9S1w/XtCKDo5XtM8+/PsC5A
zepYh0IR3hnB4K1Na//XOQssg3TZezcdLR1zwLmiSk2AckJQUX0P6wfBNqTneN6X
Tx5veQMoMvL2r6mhivPJt9Cq7HgOmEu42Pi2KTqIpBzAPP6NuIGXMOufN//Lqkbk
IByUNynALrbyM4a+wUOXteygNkJiwujINFfTCVLxABEBAAGJBHIEGAEIACYCGwIW
IQRVXy5Lb4f5GkEQZp6Qc0qeYZyKbAUCXuyYfAUJA8M+kAJAwXQgBBkBCAAdFiEE
U1yozzD8YhxLtnIc+XKu6iiH1UcFAl0KjWwACgkQ+XKu6iiH1UcpSg/7BYnRPcst
nCeuTd7OChs3uEF9A8hiu9NnulbGb8Xdpj19BYusv8yTSiUl8sxb9dyVuAo2UkTB
7G0FuVrGbF3+4NlIHSv9bigA13AcTfWZc2BJnfUVeQD7FOlUfHmBpyDRlrYXlyr2
cPJBjNkps2z2/jP0PC1aOyVkbcIQzYYbjrn0NvmwWe7ez/B5JZdRD51V35Jpj07B
ghFL59vireVuvUpl1URCMgJqTIMAl6yns2+6HDEzmwAQUa4fkldb5bvIWoKh2Zeb
2oWUGDEcavVvqjVVgg19n+NOMA97ohmT4FLo0eF0nvvpCR/hUbl0GHgcmJZuZ7oi
rhZ3q9+gYAUBdLw441M0JzjFuu/nCRSi0g4YZklGMvSHWICZIa26pvTgEuU1ye3j
gbrlVbBtONSLhyw/DwVFUzV3WSaWtE85EyGduqyFe/Ft9Nh1fKbcFcHLNoHrLo66
WafJm8E755bzSXqoAp4L4vLTFRIClkZPTGvy/kUW4EbfBRctlYlJWdl/LavxSO32
zgkkZ6uN3dE6Qg3CPxyLqW/IaPahfB2pRnQcTWlFlRmQTqYx7/LMLLxABSO5gUWj
++rlg74CiLoJCUpWSRocwLrkgqBb4Fem24E8cAgs/jsC4eqRIzyfNqtHdNfsOf2Z
GKyGjTRBMEDhLNEobr204dNCQSTv44yfG9QJEJBzSp5hnIpstucP/0xiQvVTAhiZ
QifMnZJSUgVGfejR8fMha1JO7WQkPj4mX9rHcZod3Z3UchwrayBQ0PW/H+RLZ4hu
Hz/2qznCQYnxWmu6evZz10irBgZC4GJexEeqEbO4tnV9LfkwHmX5rI/YZ0Y+JpUr
aXmqcpv6ohqb0Ex1F/lCeEi5W6W5HV8jJbaAH+5ykF7TVh0Jyu6MOKlG+oaiwemq
mmuU03DG7nX/0KZb4/txyEmQmmQ6+OSxDvcR+lh/HXQZIseC0b2N6/lVkEvuVmLV
TjD37I3AniXYf54SNbzD7MsoNigTPLgfjSCSju4/VVoIRzxcVSaFXpDWw365Slbj
o6dlFTRTVYzHTOh72VviKwnwjqOHD+FOwlCiQptOGAplZtHB51FUPX6AzQMgHrAS
8vii9T3fX0rEo+62fOmk/s6HW08bj+d7WyS2NkXsLin69bZ2RIbd7xKT4L1W9ZQr
CO/SEGfjbi2y5Gqr88o469hb5WHT4uF0SI+B1t9D7K5cqPmwm8/1QhIBwGsl2IER
IMY3jQqvowbHim/p/CLWA/zJB81HeMOOwOVzEOdL85HjuIpNW+h7wR00iStB2StQ
1nJsbKN0FcaYaSYBdJJralPxFc2BzTKJ+O8uiP1XRxG/glDscgOS1qeOLdZDV61w
WGDRECcv0i02ItNN8G4KeHq6B2EjFHO/uQINBF0KjfwBEAC4Surm8ZK0QGv835++
ppf9zwcHPt0Majn+mopm6wD0OiQJQhUUgfLRYjTz9zTTCuAniKVO0R8frncwzy2X
iKNAfhzjhRN4CQA9ECJQypyrOGaSqMAiasZqNmJ2O8J2nmyykva4AJNhWQeaYhAG
WL6T/icZiTxocMcaZd33qyeRDk3QwwCjlqG/C61zyxDoUyx7fPnoNzqnTow2l0QZ
p2A/9YXmtgnBOWD3ASgnSbS+4q/gzOdkC0KnVAGhzXk86ql9Ch2Rr2UQVmGIAMiv
jvHanl0H8KJlDrIbm7G9YB/sfqfLMy2pDJ7pATvNKlvb+TXlpAbljMKzj0KrEjxH
A5wy6b3CqAB6P2nJLRhn0iEAhHsfJy+To5w72y/ms/jnwaQaRZHv0b6fRGZoJX/x
2FuWKMxf0ocHJwntZsvc9eeY9wxQopdPztS5WeHAdgLJzhPrmo56HXPFQVsQE1s6
XOepGwgtQppnoKvUnBR97OESRryr4vh98J5iCDWcBDpf9aZCgwTheDG4Dn9Q7387
Z0DZFVyj8cTJtOg3FwYrtTKkQj23shl4tUJ9XR4vkJSjjlD9Ks9LWuJsIGzvKIOy
1pG23jxHos1IRRB2G7zxzELHiWm4ZSELUj8+sl59DckGDemta3KUDEJ6tbjAODSX
719cuvIHGhFjKGzm5tE3OXBYmwARAQABiQI8BBgBCAAmAhsMFiEEVV8uS2+H+RpB
EGaekHNKnmGcimwFAl7smIEFCQPDPgAACgkQkHNKnmGcimyJRRAAvaP9+bP0GTT0
/cuYxwrmxInW7L4maHE77okXOtTYQ1qwwdWJv03/y7nEpofxT6VRGdIW1FwG1UHU
upeIy6DqNyog7pCJsqOIW7zGpdKDJV01TxdA2Kk/7FByviBXdH3BLjKso1n2nlQf
hGj5P8SYHXKw0pBh96AZhJDWiddn68RMf+W6BqGudA9DbvM9ZxSf82AJh1m3b1JO
7ckWrc2kMrr/JXxxxQnL20y4WoV63sHmZj91bQcsbOLnM+wJswzvdb6B8kC7/+EH
1q73O5Z4RzvjXt1Rpyy6e+VOKTZ3/sBUS08JSEK7WSVSV1pLXUwtreV9cW6PE+UL
ntMluefVrfe1XFFfHnJiwC0OlNjRIrzNTMJoX61OdH22q1vz3+4WbisaXXs/o5L0
t0yCPLqMiYyd4A2vwSpKViZ+ngyCaHlugJEBFBLFYRjHbtp5klpaJoQ4gxCmKjnT
sYI1DGzPxelvomgkL6lCnKXhvMZ+Xap3gmSneOOIp8dM6/wgP4ndpfot6JOTtPpA
8+sFIZ4eQCJq3J8se/wtAuBYsCs05lvlVUirzw5d4n3r34ZhY2+Ektp3Q8aT1bVz
Fv7QQ2bGojSwWEpsQwZOw58OTScUix/sq//gYsYC2T2VOMMN3QsFvUsM325SIcxL
8SIRNWnLH5nHSmqnEwV7RdUaJD6nH6W5Ag0EXQqOKQEQALDlTUDZmzYAd0ajDUKD
0nVDRSJ9AFmISrdnDF5QADYnvV/1OC2PLUzCpy+xhoPYnizz5kADZTHuh/oRXGyE
kHuplTuG6XAyt7tzrdZaoghEDBz5acwNhVh8aTKPmk0pHSASABbKkUiM6sC1UfrC
x/gtWHJ9ETZpdMdf03kokgjiOkOSwFS0zA31zO6yX+eZRILQvWlKCNPP3q2OC1RL
6It9/lLPIblJEhuR5HR1oPrsPK3cI7bYnLD/eoBAfSAPHY5peXdkrbr0+2B0bMSC
22OkZfTjTDTO2P7FIKCc9lAomthj1Hz9RSuDuqP9LFQuHa2e8l8vCaA8aJMGnSD+
1bMunAqg1LxzarTFtxVjlLdUN//66o0IloPVh0XfCdJzPucI1Hv9cmWS7jcLkakh
Sy9jFWbpZ84AGmVF3VDtOxuLXr4MBKTov76+CYVVCoyeidWbWZNnre1FbbaKf8Ti
WmkRbhFF+MC/BHFiUnNhch86NwqUScYKpip02baPUhMo2hTwNfEgxe0FFHqCn428
S/kwyqm1hC2FCc70QTv66FmWcD8KU4ELGiOkL5qLhlLZtCQ4CMgAnGVBw9N9h+Cc
YlObMg4x+wSRu7vFrAJTfkiQtGZ0nQie95+DGiDxUXqkoFwe+XsOaXm0KdKoiSv5
rNzYr2ul8EyVbtA08cZY78y7ABEBAAGJAjwEGAEIACYCGyAWIQRVXy5Lb4f5GkEQ
Zp6Qc0qeYZyKbAUCXuyYgwUJA8M90wAKCRCQc0qeYZyKbH/bD/9htvUhzpfsutbh
rpQDOCC5j7dabpFGbUDSZSmE8fpZnFf6KMSc3dUCSBBF81rU6mzwc7xbdB1bn70O
U/dpJD8ZPPd1BDVVXqVTbLiCM2IMQmDU468q59fCmHREZeDXVtVCeP3XOI/lCnoI
g8Gl8prv0wXLMEdzb2w5h57l99dSmnXo111bZlmALe4Ulo5vzJH3tHDAqxPy+nrL
9bLDL6uTj/G5RRPMXjaUxUvwS+mX7Gx2FGw9bNfuMnmVVUuGc4+jtsriHlVTDuFy
Ips3dUh+Kn4ytpn0V3a+zlD9zh+MlE6yJLEkgkfd6wkKZ2kfc1gmmNaB7wZHGW1q
uxlcjgCLXwHOtH8Xr0tm/dChTMaquoLoDX6MZ2mptYSGgc+4TYvLZwLzT0bI+D/n
dyMaSFwvd/Aa/SQeAMnptpL0kBwa/N67SJhDeZ3iQx7uiAElzQvtG0a0W4gWAyfd
deDb0IqLgrhIxOiWCAPjpvjFJ7KfQXBuCTlukVw3MQenIgRkicdXIVpS6xDSUmSV
Ut7757rKjU20gVeRqolyN8s8k9WCEz8Snt+3920xnrlzJexUtg7Qwac8bcnwSvmj
dX7EJA8OedSRTq9YWfdZxdI9e8T4R4dLIbw6IWF3vPFI6PDTKMfx8gH2CYf/zlYg
Tp4Yu2VfWxHtlc8fTREzXd3Jqxq48Q==
=2/kN
-----END PGP PUBLIC KEY BLOCK-----

@ -0,0 +1,33 @@
Title: Jolly Christmas Decoration
Category: Blog
Date: 2015-09-17 15:30
Tags: /dev/diary, hardware
Christmas is getting closer (not really, but let's just roll with it) and I wanted to learn [KiCad](www.kicad-pcb.com), a piece of software that lets you create circuits and design PCBs for manufacture.
I found a tutorial series online by a guy named [Ashley Mills](https://www.youtube.com/channel/UCaBNA-lmg35Wfx2eh2oDkWg) (with quite a legendary beard) who showed off a simple circuit using a 555 timer, a shift register and an XOR gate made from NPN transistors and resistors to display and repeat a pattern on several LEDs.
The series focused on getting to know KiCad and all its features. And while I did that in the first revision of my board, I've diverged from it since. I can however recommend his videos on KiCad to anyone who wants to dive into PCB design, has no clue about the software and could use a little chuckle while also learning some really awesome software (YouTube channel link above).
# My Christmas Bauble
So this is what I've got.
![Kookies Christmas Bauble](/images/christmas_bauble_pcb.png "Kookies Christmas Bauble")
As you can see, it's a round PCB with simple 5mm LEDs around the edges. It no longer uses NPN transistors but rather a single SMD XOR gate. Much easier to wire up, cheaper, and less prone to errors as well.
In general I've switched the entire design over to primarily use SMD components, as they're smaller and more elegant. This also theoretically allowed me to get the footprint of the board down to something that isn't too excruciatingly expensive to produce.
It took me two more revisions to get the board to a state where it's not too complex and actually fits on a single layer (!) with no vias, except for the holes for the LEDs, obviously.
It uses a coin cell battery on the back of the board to hide it away and has a hole at the top to actually hang it off a Christmas tree. Theoretically the battery should last a few days, so maybe have a few spares around in the Christmas season.
# What now?
I haven't manufactured this yet; I'm still thinking about refining the design slightly. I have the **entire** back of the board to work with and add things to. I was thinking about adding a simple Bluetooth chip so that patterns could be pushed to the device via an Android app. But that's the future. For now it should actually be functional, and maybe I'll order some `Revision 3` boards just to see that everything works.
Here is a dynamic render from KiCad as well.
![Kookies Christmas Bauble Rendered](/images/christmas_bauble_render.png "Kookies Christmas Bauble Rendered")
And be sure to check out my GitHub repo for the project if you want the KiCad files, either to play around with them or to manufacture some baubles yourself. If you do, I'd be interested in pictures of the decorations on your Christmas trees so I can add them to this article as a slideshow 😊

@ -0,0 +1,38 @@
Title: [Update] Jolly Christmas Decoration
Category: Blog
Date: 2015-11-27 15:30
Tags: /dev/diary, hardware
You might remember I played around with KiCad a few months ago and made this [tacky little thing](/hardware/jolly-christmas-decoration/). About 2 1/2 weeks ago I went onto [DirtyPCB](http://dirtypcbs.com/) to actually get them made. I wanted to have gone through the production process and gotten something built before starting more complicated projects.
Unfortunately I discovered a little mistake in the layout that ended up at the manufacturer (Rev 3.1). I tried to fix it, but Rev 3.2 didn't make it in time, which means my boards will be a bit more complicated to power. Not too complicated, though, as the power inputs are just through-holes, so I can strap almost anything behind them for power.
But without further ado, here is the result from DirtyPCB (which I am actually quite impressed with).
![PCB with Banana for Scale](/images/christmas_bauble_pcb.jpg)
Now, I'm new to all of this, so I started doing beep-tests on the pads to make sure things were properly connected, and all the boards passed. The production quality is pretty good. Unfortunately I can't start assembling them yet, just because none of the parts I ordered have arrived. The manufacture and shipping of the boards actually beat the shipping of off-the-shelf parts!
Anyways, I'm kinda excited. First time making an electronics project. I might post another update when the parts arrive and post a few gifs of the finished products. If I don't, I'll probably tweet about it though.
I also have another, smaller electronics project in the making where I am, again, waiting for parts to arrive to do some testing, and I'm already designing a modular PCB. (Limited a bit by the 10x10cm restriction on DirtyPCB, I need to design the project in a way that lets me take a bunch of smaller panels and stick them together into a large one, which would cost hundreds of dollars to make elsewhere.)
[But realistically for the production quality I saw with these, I'd be happy to give them my money again for future projects. Especially at that price, just unbeatable.](https://www.youtube.com/watch?v=d36wUmJGzvA)
😊
Anyways, enough ramblings. Read you later.
# Update...update
Right...so after tinkering with the bauble a bit I found out a few things. The most important one being that I made some mistakes. Some big ones :)
- Pin 9 of the shift register was connected to both input A and input B of the XOR gate. Which meant that both inputs were always the same...which also meant that the output was always 0.
- The 555-timer clock ran at several hundred kilohertz. I had to change the capacitor down to ~12µF and the resistors to ~4.7 ohms.
- The coin-cell battery didn't have enough juice to run it. Two had to be put in parallel. Even then, two batteries would not be able to run for very long.
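For reference, a 555 timer in astable mode oscillates at roughly f = 1.44 / ((R1 + 2·R2)·C). A quick sanity check (R1 = R2 is my assumption, and I try the "4.7" as both ohms and kilohms, since the post's units are easy to mix up) shows just how sensitive the output frequency is to that choice:

```python
def astable_freq(r1_ohms, r2_ohms, c_farads):
    """Approximate output frequency of a 555 timer in astable mode."""
    return 1.44 / ((r1_ohms + 2 * r2_ohms) * c_farads)

# Values loosely taken from the post; R1 = R2 is an assumption.
print(astable_freq(4.7, 4.7, 12e-6))    # with 4.7 ohm resistors: ~8.5 kHz
print(astable_freq(4700, 4700, 12e-6))  # with 4.7 kilohm resistors: ~8.5 Hz
```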
To make the bauble work I bridged the XOR gate completely, simply feeding the end of the shift register back to the beginning.
In addition to those things, some of the LEDs sometimes didn't work. I'm not sure if that is due to broken shift registers, traces or LEDs. All in all I do consider it to have taught me quite a lot about electronics: going through the process of producing a PCB and then debugging the electronics once they arrive and inevitably go wrong :)
I am currently in the process of redesigning the entire circuit from scratch and making it easier to solder. I want to turn it into a beginner soldering kit that people can learn to solder with and also have something to hang off their christmas tree in the jolly season.

@ -0,0 +1,49 @@
Title: Open Plantbot – Rev A
Category: Blog
Date: 2016-03-16 12:08
Tags: /dev/diary, hardware
Spring is coming in Berlin and thus my thoughts – as every year – are with plants and growing them. I live in an apartment with a tiny, tiny balcony, so I don't have much space, but that has never stopped me from wanting to cram as many plants into the space as possible, to the point of starting nuclear fusion.
In addition to that I have a few house plants and very water-sensitive trees in my apartment. My current approach is to go around with a jug of water every couple of days and water them individually – making sure the soil has a certain moisture and doesn't exceed a certain limit – but I've always had the dream of automating away as much as possible. That's where the idea of `Plantb0t` started. And I want to tell you a little bit about it.
The basic idea is to have a little controller in each plant pot that measures the moisture of the soil and reports back to me via an ESP-12 SOM (System on a Module). The ESP has WiFi capabilities and would log to an MQTT server on my home media server. This way (when I'm at home – none of that IoT shit) I can see how my plants are doing.
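To sketch what that reporting could look like on the wire, here is how a single reading might be packed into an MQTT topic and JSON payload. The topic scheme and field names are invented for illustration (the real firmware runs on the ESP-12 itself); a client library such as paho-mqtt would do the actual publishing:

```python
import json
import time

def moisture_message(plant_id: str, raw_adc: int):
    """Build an MQTT topic and JSON payload for one moisture reading.

    The topic scheme and payload fields are made up for illustration,
    not taken from the actual Plantb0t firmware.
    """
    topic = f"plants/{plant_id}/moisture"
    payload = json.dumps({
        "raw": raw_adc,          # raw ADC count from the probe
        "ts": int(time.time()),  # unix timestamp of the sample
    })
    return topic, payload

topic, payload = moisture_message("balcony-tomato", 612)
# A client such as paho-mqtt would then publish it, e.g.:
#   client.publish(topic, payload)
print(topic, payload)
```

On the server side, anything subscribed to `plants/#` would then see every pot's readings.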
# Current state
So that's what Revision A of Plantb0t is. I also added a second sensor slot which is meant to be populated by a temperature sensor but could theoretically house a second moisture sensor. In the end the probes are only sticks in the ground that have a resistance between them.
Here is a dynamic render of the board (that went into prototype production on the 29th of march, 2016).
![Plantb0t Rev A](/images/plantb0t_RevA_front.png)
As you can see it's powered by an ESP-12 and comes with its own programmer (the lovely CP2102) and micro-USB header. The USB port is currently the only way to power the board.
In the future the plan is to bypass USB power, using it only for the programmer, and otherwise drive everything off an external power board which provides 3.3V for the Plantb0t.
At the bottom you see two constant current sources that can power two analogue sensors, which get multiplexed into the ADC of the ESP-12.
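Back-of-the-envelope, with a constant current source driving the probe, the probe resistance falls straight out of Ohm's law once the ADC count is converted back to a voltage. The 100 µA source and 1.0 V full-scale reference below are illustrative values (1.0 V over 10 bits is commonly cited for the ESP8266 ADC), not Plantb0t's actual design numbers:

```python
def probe_resistance(adc_count: int, i_source_a: float = 100e-6,
                     v_ref: float = 1.0, adc_max: int = 1023) -> float:
    """Probe resistance from an ADC reading of a constant-current source.

    V = (count / adc_max) * v_ref, then R = V / I (Ohm's law).
    The 100 uA source and 1.0 V reference are illustrative placeholders.
    """
    v_probe = adc_count / adc_max * v_ref
    return v_probe / i_source_a

# e.g. a mid-scale reading with a 100 uA source:
print(probe_resistance(512))  # roughly 5005 ohms
```

The nice property of a constant current source is that the reading is linear in the probe resistance, so no divider math is needed.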
GPIO pin headers are included for external gizmos such as a pump to act on the moisture data, as well as screw holes to mount the whole thing in a 3D-printed case.
In total the board is only 5x5cm!
# Future plans
A few things I want to realise with this project in the coming weeks:
- Primarily the Rev A board needs to be tested to make sure that the programmer works
- Figure out a good way to calibrate the sensors. Maybe drive a button via GPIO?
- Design a power board that generates 3.3V for the board (but not the programmer!) from a solar panel and a battery to decouple the entire sensor-board from all power-sockets.
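On the calibration bullet above, one simple approach would be a linear map between two stored reference readings, one taken with a dry probe and one in saturated soil. A sketch, with placeholder numbers rather than real calibration data:

```python
def moisture_percent(raw: int, dry: int = 870, wet: int = 440) -> float:
    """Map a raw ADC reading onto 0..100% using two calibration points.

    `dry` and `wet` are reference readings taken with the probe in dry
    soil and in saturated soil; the defaults are placeholder values.
    The voltage across the probe falls as the soil gets wetter, so
    `wet` < `dry` here.
    """
    span = dry - wet
    pct = (dry - raw) * 100.0 / span
    return max(0.0, min(100.0, pct))  # clamp out-of-range readings

print(moisture_percent(655))  # halfway between the references -> 50.0
```

A GPIO button could then simply trigger storing the current reading as the new `dry` or `wet` reference.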
For the next revision of the board (Rev B) I want to include more sensor slots, maybe work on the part spacing a bit and increase footprint sizes. It should be easier to solder, and some of the parts are ridiculously small. I mean...I have the whole back to work with?
I also have some crazy ideas for a "Plantb0t+" Version with even MOAR SENSORS (Including a pH-value sensor!). But that's all faaaaar in the future.
Either way...I'm excited for my boards to get here (hopefully in the next 7-8 days) as well as all the parts I need for the prototypes.
I leave you with a screenshot from KiCad where you get to see under the hood of the board. Cheers o/
![Plantb0t Rev A](/images/plantb0t_RevA_naked.png)
(The project has a [Github](https://github.com/spacekookie/open_plantb0t) repo where I will try to populate the wiki with as much info as possible)

@ -0,0 +1,25 @@
Title: I got accepted to GSoC 2016
Category: Blog
Date: 2016-04-27 18:47
Tags: /dev/diary, gsoc2016
![Acceptance Mail](/images/gsoc/00_acceptance.png "Acceptance Mail")
The title should be self-explanatory 😊
But let me go back a little bit. A couple of weeks ago I sat in the basement of my local hackerspace talking to a friend about crypto when somebody joined the conversation, asking if I was a student and if I might be interested in Google Summer of Code.
After I looked up the project and familiarised myself with what had to be done, I thought it would be interesting to apply. And so I did. I wrote a long-ish proposal of what I wanted to do, how I would do it and when exactly I would accomplish my goals. (You can read my original proposal [here](https://storage.googleapis.com/summerofcode-prod.appspot.com/gsoc/core_project/doc/1458924075_GSOCProposal-KatharinaSabel.pdf?Expires=1461863360&GoogleAccessId=summerofcode-prod%40appspot.gserviceaccount.com&Signature=h0y5Nzi7llFNWKzt9%2BLGLvxcAPZ%2FaO7ni1ZyRDA3uFi6PD%2BDBmtIB6RJAr4Ulhv6fe64IFyB%2FI9iuVIYWIInYTmN7pZ9aUxw6TgxgFYguIywfcE2yUZ4o5UKb0PUbwI0Pu7o6mq%2BzSDXqlegpVOgujQ9k2QuTg1T1CqGzSi%2FnC4u6H0mB%2BxzWGGpoBC6rFwkKM1S70gE7hJ0EZpgYWr9H9zKPcwrfPtx99zqb488sH6STGYJf4tFrDRnnr57k2zbSN%2BO17chZtVBjGUYrKoyU6B%2FGB8MexFE6rmYaTCr5AjgqGWm97VCCwZkpHbRiTtFH5yT825G9%2FkRPYHkxsPnCw%3D%3D))
In the meantime I actually had a sit-down with my mentor (the person joining the conversation in that basement) and made further plans how to implement things.
And so this is it. For the next month or so I will have time to get to know the code base of the project (although I partially already have), meet more people from the community and generally get into the rhythm of what GSoC is.
I will be posting three blog posts on the official [Freifunk Blog](http://blog.freifunk.net/): one in a couple of days or weeks, one at the height of the project and one that will go into the aftermath of the project.
But in the meantime I will be keeping my blog up to date about what I am doing, how things are going, my challenges and things I learn.
In the hope that people might find it useful and learn things from it. Or just to save my insane ramblings in some narcissistic pleasure...to think that I am relevant in the world 😜
Read you soon,
Kate

@ -0,0 +1,37 @@
Title: First steps...baby steps
Category: Blog
Date: 2016-06-02 19:56
Tags: /dev/diary, gsoc2016
So it's been almost two months, the community bonding period has passed, blog posts were written, talks held and slowly but surely I'm working myself into the qaul.net codebase.
It's always weird joining a larger project and seeing established build setups, code conventions or generally things where your first thought is "I would have done that differently...". But it's really fun.
I'm currently working myself into [mbed.tls](https://tls.mbed.org/), the crypto library that was chosen to power the cryptographic backend of libqaul (which powers qaul.net).
That includes some code that will probably not make it into a later version of my branch: the debugger.
# The De-bugger?!
![Debugger Pro 2016](/images/gsoc/01_debugger.png "Debugger")
Well...debugger might be a bit of a strong word; it's basically a way to develop core functions of qaul.net without having to start a GUI, go through NetworkManager dialup or OLSR bootup.
I am currently busy writing a wrapper around a new namespace added to libqaul – `qcry` (short for qaul crypto) – and properly integrating all the mbed.tls sources into the library so they can be accessed by libqaul. The idea being that I don't have to leave vim and the terminal to develop the core cryptographic components such as:
- Key generation
- Identity generation (with private key fingerprints)
- Identity verification
- ???
Only in the last step of the last bullet point do I actually have to involve the GUI of qaul.net. And until that point I wish to not come in contact with it (if avoidable).
So most of next week will be getting to know mbed-tls as I have never worked with it before. But hey...can't be worse than the gcrypt documentation¹ 😂
Hope to read you soon with more updates (probably rants).
Kate o/
---
¹I am sure I will eat my words in 4 weeks

@ -0,0 +1,69 @@
Title: What I have done in GSoC 2016
Category: Blog
Date: 2016-08-19 18:13
Tags: /dev/diary, gsoc2016
Google Summer of Code is coming to an end. And as the final bugs are getting squashed and more code is being prepared for the big merge, I am sitting here, trying to think of how to represent my work.
I thought I would write up a little blog post, explaining what I've done and what still remains to be done.
# The TLDR
My main contributions are all available [here](https://github.com/spacekookie/qaul.net/commits/qaul_crypto?author=spacekookie) (spacekookie/qaul.net on the `qaul_crypto` branch). I did a lot of small commits. Most of my code can be found in this [sub-directory](https://github.com/spacekookie/qaul.net/tree/qaul_crypto/src/libqaul/crypto).
In addition to that I ported an existing project (from Python) to C to be relevant for future front-end endeavours of the client. It's called [librobohash](https://github.com/spacekookie/librobohash). I didn't end up finishing the port because there were more pressing issues in qaul.net and the UI was delayed.
While most of my work has been in hidden backend systems there is a demo you can run. The source compiles and has been tested under Linux (Ubuntu 16.04 and Fedora 24) and is located under the `src/client/dbg/` directory. The demo creates two new users (to simulate communication between two nodes), adds the public keys to the keystore and then continues to sign and verify messages. If the demo returns lots of "0" and "OK" it went okay 😊
Feel free to play with the demo; for example, switch out `message` for `fakemessage` during verification 😊 The source for the demo can be found under `src/libqaul/qcry_wrapper.c`
# The good (aka what I have done)
<img class="dual" src="/images/gsoc/02_cryptoui.png" align="left">
The two main components that I've written during GSoC2016 are internally referenced as `qcry_arbit` and `qcry_context`. They are two modules that make up the new crypto module in qaul.net.
As I explained in my first blog post on the [Freifunk blog](http://blog.freifunk.net/2016/gsoc2016-wrapping-crypto-module-qaulnet) the Arbiter provides a static API for the rest of the library (libqaul) to interact with the crypto module.
The context holds the actual magic: keeping user keys, signing and verifying messages and (theoretically) encrypting messages as well.
With this API it is currently possible to create users, to sign messages with a user's private key and to verify messages that are sent to you by other users. Originally it was planned to split the arbiter into the actual API and a dispatcher which would allow for concurrent access to the inner functions. However, tests established that the design was overkill, and it was thus scrapped.
A keystore was added in addition to the user store already existing in qaul.net to provide an easy way to store public keys (mapped against fingerprints) that are received from flood events on the network.
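The keystore idea (public keys mapped against fingerprints) can be sketched in a few lines. This is an illustrative toy in Python with SHA-256 as a stand-in hash, not the actual C/mbed TLS implementation:

```python
import hashlib

def fingerprint(pubkey: bytes) -> str:
    """Derive a short hex fingerprint from a raw public key blob.

    SHA-256 is used here as a stand-in; the real module's hash and
    fingerprint length may differ.
    """
    return hashlib.sha256(pubkey).hexdigest()[:16]

class Keystore:
    """Toy keystore: fingerprints mapped to public keys, as keys are
    picked up from flood events on the network."""
    def __init__(self):
        self._keys = {}

    def add(self, pubkey: bytes) -> str:
        fp = fingerprint(pubkey)
        self._keys[fp] = pubkey
        return fp

    def lookup(self, fp: str):
        return self._keys.get(fp)

store = Keystore()
fp = store.add(b"-----BEGIN PUBLIC KEY----- ...")
assert store.lookup(fp) is not None
```

The point of the structure is that once a signed message arrives, its sender's fingerprint is all you need to fetch the key for verification.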
In total the crypto submodule adds another ~2.2k lines of code to the project.
# The bad (aka what I haven't yet done)
Encryption is so far completely unimplemented. Unfortunately, working with the crypto library selected for the task turned out to be more challenging than expected. With almost no documentation and only a few very niche examples, I basically went through the library line by line to understand how it worked.
As such, my focus was set on signature exchanges at first because the verifiability of messages and the change to address users by their fingerprints was deemed more important.
My contributions to qaul.net won't end with the end of Summer of Code. The function stubs are already provided and I plan on implementing the encryption features in the coming weeks.
# The ugly (aka what I can't do yet)
Signatures (and also encryption) of private messages (messages that aren't flooded to everybody) are currently impossible. This is due to the way that the communication system in qaul.net works.
I have talked to my mentor and he said that they were currently in the process of re-writing the communication sub-system in libqaul. This means two things:
1. I need to wait for those changes to be done until I can finish what I set out to do
2. Some of the code I wrote (hooking into the current communication system) is being made obsolete 😞
# In conclusion
What I can say is this: qaul.net has gotten a very big step closer to becoming a more secure network of communication. The crypto submodule is tested and easy to use. What might happen is that parts of the code get merged (the crypto submodule itself) without merging any of the code that hooks into the communication stack.
I had a lot of fun working on this project and I am looking forward to more contributions. I have a few cool ideas that I want to discuss with the rest of the team and I am glad that I participated in the Google Summer of Code.
I was interested in open source before and I contributed to my own projects on github. But the experience I gained this summer will be helpful for me, not just for my own work, but to be less reluctant to join other developer communities.
And I look forward to seeing my code get merged into qaul.net 😊
Read you soon,
~Kate

@ -0,0 +1,20 @@
Title: Chaos Communication Camp 2015
Category: Blog
Date: 2015-08-25 15:30
Tags: /dev/diary, ccc, c3
Hey everybody, long time no read.
Having returned from my vacation at the Chaos Communication Camp 2015 (not sure if I'll post more about that), and probably starting a new job next week (*pssst*, not sure if I should talk about it 😉), the rest of my summer is still ahead of me and I'm bursting with ideas and inspiration to do stuff.
I've started coding more intensively on the `newdawn` branch of Reedb, the C port of the database, and planning some features for the old codebase via the `backports` branch. Because the new codebase will use a different crypto backend (moving from OpenSSL to gnu_crypt), a migration agent will be necessary to migrate from 0.11.x to 0.12+ vaults. But as very few people currently use Reedb, and most setups are for testing purposes only, that isn't a very big priority right now. Depends on how the current version of Reedb develops :)
But that's talk for another day. What else has been going on? After the Chaos Communication Camp 2015 I've been playing around a bit with my rad1o badge.
![Rad1o Badge](/images/rad1o_badge.png "Rad1o Badge")
But not much has resulted from that yet. The distribution I'm using (Fedora 22) at this time unfortunately has a broken arm-gcc package which means that a linker for embedded systems isn't working properly. So hacking on that will have to wait a little bit. But I will very likely post more stuff about that in the future.
Until another day,
Kate

@ -0,0 +1,59 @@
Title: Recovering a destroyed LUKS container
Category: Blog
Date: 2015-11-19 11:41
Tags: /dev/diary, data recovery, linux
So...funny thing happened to me the other day. And by funny I mean not funny. Actually, I mean quite the opposite of funny. I booted my laptop after shutting it down for the first time after several weeks of activity and...nothing.
I stared at my plymouth boot screen while nothing prompted me to type in my passphrase to decrypt my hard drive, and the first thought through my mind was:
> Fuck...I don't have a backup.
# How to debug
Now...not to worry, after some time I was dropped into a recovery console where I could ask very simple questions like what kernel modules were present and what Systemd had been up to. And at first I thought the problem was clear: `Module failed to load: vboxdrv` and other messages populated my screen – all about VirtualBox kernel modules.
So the problem was clear: I had fucked up something when installing a new kernel, VirtualBox or anything else. So I blacklisted the modules and moved on...except it didn't help. The problem persisted. Thinking that I had messed something up when dealing with the GRUB config or the GRUB recovery console, I got my trusty Fedora 22 live USB out and booted off that.
# How not to panic
Looking at the partitioning of the disk I realised that my 256GB SSD was only 500MB full (which was rightfully detected as an `ext4`-formatted volume). The rest of my drive was marked as `unpartitioned space`. 😱
Now...here is where things got interesting. But first let's have a look at my setup.
```
sda (the actual drive)
├── sda1 (ext4, mounted as /boot, contains my kernel)
└── sda2 (LUKS encrypted volume, contains subvolumes)
    ├── vc-root (RootFS)
    ├── vc-home (HomeFS)
    └── vc-swap (guess c:)
```
So as you can see, my boot partition is outside the LUKS container and unencrypted, which is why I even got the chance to enter a recovery console. The rest of my system is encrypted. And seeing that only sda1 was being picked up meant that the partition table on my disk must have been destroyed to the point that it no longer knew about sda2.
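For context on how little corruption it takes: a classic MBR partition table is just four 16-byte entries at byte offset 446 of the first sector, so a handful of flipped bytes can make a partition vanish. A minimal parser sketch over synthetic data (not my actual disk):

```python
import struct

def parse_mbr(sector: bytes):
    """Parse the four primary partition entries of a 512-byte MBR sector.

    Each 16-byte entry holds: boot flag, CHS start, type byte,
    CHS end, LBA start, sector count. Empty entries have type 0x00.
    """
    assert len(sector) == 512 and sector[510:512] == b"\x55\xaa"
    parts = []
    for i in range(4):
        entry = sector[446 + 16 * i : 446 + 16 * (i + 1)]
        boot, ptype, lba_start, n_sectors = struct.unpack("<B3xB3xII", entry)
        if ptype != 0x00:
            parts.append({"bootable": boot == 0x80, "type": ptype,
                          "start": lba_start, "sectors": n_sectors})
    return parts

# Synthetic example: one bootable Linux (0x83) partition at LBA 2048
sector = bytearray(512)
sector[510:512] = b"\x55\xaa"
sector[446:462] = struct.pack("<B3xB3xII", 0x80, 0x83, 2048, 1_000_000)
print(parse_mbr(bytes(sector)))
```

Zero out one of those entries and the partition it described is simply gone from the table, even though the data behind it is untouched, which is exactly what tools like testdisk then go hunting for.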
Knowing this didn't help very much though and it took me a few hours to fix this.
# Restoring the Partition Table
So the main problem was that my partition table was broken. I don't want to start speculating as to why this happened. Maybe my SSD just lost a few blocks, maybe it was bombarded by solar radiation, or maybe (just maybe) I was abducted by aliens in the night, refused to give out my master passphrase in my sleep, and out of frustration at not being able to get to my data they deleted some chunks from my partition table just to spite me.
Either way, a combination of two applications saved my life and hopefully will save yours.
`testdisk` and `cfdisk`
First of all, make sure you have backups ;) And don't blame me if you fuck it up. Also, you need to know EXACTLY what your layout is to restore this. Otherwise BAD THINGS WILL HAPPEN *waves hand around warning-ly*
Run `testdisk` on your drive, enter through the screens, let it do a deep search and just say yes to everything it wants to do. This restored the LUKS header for me, at which point my computer at least started seeing the encryption container again. It didn't mean I could log in, because keyfiles couldn't be found (they're apparently not in the header).
After that, I ran `cfdisk`. What this program does (or can do) is rebuild your partition table. After letting testdisk have its go, it had found my LUKS header but completely destroyed my ext4 boot partition. So in my case, this is what it looked like.
![cfdisk before it saved us](/images/cf_disk1.png "cfdisk before")
What you will want to do is hit NEW and select the correct sizes for your partitions. Depending on how running testdisk went for you, it might have found different partitions, all of them or none. Please! For the love of god, make sure you get your sectors right. Because if you don't, it will seriously damage your system and might make it completely unusable.
In my case it was easy: I filled in my boot partition, marked it as bootable and set its type correctly, and also fixed the type error where sda2 was being picked up as an LVM and not a LUKS container (this screenshot is obviously from my running system). And this is what I ended up with.
![ ](/images/cf_disk.png "cfdisk after")
Make sure you write your changes, exit and reboot. And if you did everything right, you will have a working system again.
And that's that. I hope this article will be of use to someone at some point. And remember: make backups!
Cheers o/

@ -0,0 +1,27 @@
Title: Winter update
Category: Blog
Tags: /dev/diary, meta
Date: 2016-12-02 10:43
Howdy everybody!
As the year is winding down and we're all getting ready for the jump to take us out of what has (in my opinion) been a *very* shitty year, I looked at my blog and could only shake my head.
I had moved this over from Wordpress to Pelican and basically replicated all of the layouts, to the extent that some of Pelican's own functionality had to be abused to make it work. But as I kept publishing things on here I realised that most of the features I had implemented went unused.
And so, for the last few days I have tweaked the layout (and design - as some might notice) to be a bit more traditional again.
I was also considering changing the theme, but after not finding anything I liked I decided to hack the fuck out of my current one instead. You can check out all of my horrible changes [here](https://github.com/spacekookie/nest).
I've also finally done some stuff that I've wanted to do for ages – such as pimping up the front page, adding a proper projects page and going through some of my old tutorial series to fix their formatting (yea right, "perfect wordpress import"...) and update them to newer APIs of libraries. Some articles have just been dropped because I would have had to rework their formatting and they were no longer relevant. Stuff will slowly be introduced again, with proper formatting 😊
## Everything else
In terms of literally everything except my blog: I'm looking forward to the **33C3**. I'll be joining with the c-base assembly. My first congress in almost a decade! Expect maybe an update from that. And maybe there might be some christmas hacking. It's always more fun to do silly RGB LED stuff if it ends up annoying people on the tree!
Also, with the blog now in a bit better shape I will try to keep a closer journal of what I'm doing. But hey...no promises, right? 😉
I shall leave you with this relaxing GIF.
<img class="original" src="http://i.imgur.com/KZquOZM.gif" />

@ -0,0 +1,71 @@
Title: Post 33C3, what next?
Category: Blog
Tags: /dev/diary, c3, ccc
Date: 2017-01-03 12:28
Howdy everybody,
I just came back from the annual hacker conference in Hamburg, Germany, known as the "Chaos Communication Congress" (or CCC for short). It was the first time I was there for the entire event, and the first time I was able to go at all since *2008*. So yay!
It was a lot of fun and I have a lot of nice memories to hold onto now. I talked to a lot of interesting people, learned new things, got inspired to do new things and continue on old things.
More importantly, I loved the chance to get in touch with some other women in the tech industry (via Haecksen & Queer Feminist Geeks), talk about problems, attempt to come up with solutions and just generally rant about things :)
I also found out that I am in no way, shape or form a dancing person. Although electronic club music is fun!
# Some talks I went to
Following is a non-comprehensive list of the talks I went to. I am filling this from memory, so some talks might have been missed or dropped. And maybe I'll just edit them in later without anyone ever knowing.
## [How Do I Crack Satellite and Cable Pay TV?](https://fahrplan.events.ccc.de/congress/2016/Fahrplan/events/8127.html)
A really quite epic lecture about using glitching to extract keys from a very dated security layout. Not that anyone should do this (it's not worth it anyways...there's never anything good on), but it will teach you a lot about hardware security.
## [Bootstraping a slightly more secure laptop](https://fahrplan.events.ccc.de/congress/2016/Fahrplan/events/8314.html)
A talk about the flip-side of TAILS, which aims to introduce trusted computing into a world where the machine can't be trusted: HEADS uses coreboot and cleverness to create a verifiable machine environment to build an OS on top of. Made me want to get an old thinkpad on ebay to play with 😊
## [The Nibbletronic](https://fahrplan.events.ccc.de/congress/2016/Fahrplan/events/7925.html)
A relatively short talk about the creation of a musical instrument. Learning by doing and failing. Quite interesting for me as a hardware designer (as a hobbyist) but also a musician.
## [Shut Up and Take My Money!](https://fahrplan.events.ccc.de/congress/2016/Fahrplan/events/7969.html)
If you have a bank account with N26...stop having a bank account with N26. Their security is absolutely horrible. And while, yes, all of these security issues have been fixed, it shows a rather lacking attitude towards security from their engineering team. The best demonstration of client-side security gone wrong. And why ReST APIs are fucking awful!
## [Untrusting the CPU](https://fahrplan.events.ccc.de/congress/2016/Fahrplan/events/8014.html)
This was a great talk given by a close friend of mine about one of his super crazy projects. The idea is to construct an FPGA-powered PCI-E device for laptops and/or desktop computers that intercepts messages to the display, encoding and decoding text in them to provide an interface for encrypted messages without using the CPU. It's really quite interesting and I can't wait to see what he does with it.
## [Making Technology Inclusive Through Papercraft and Sound](https://fahrplan.events.ccc.de/congress/2016/Fahrplan/events/7975.html)
One of my favourite talks was about an engineering toy kit aiming to be more inclusive. The problem it attempts to tackle is the incredibly low number of women in computer science and engineering (significantly lower than in other scientific fields). There are a lot of reasons why women aren't well represented in these fields, and they are all cultural. This talk was about trying to change the culture around teaching people about electronics and code to be more inclusive towards groups of people (mostly girls/women) who would otherwise be missed.
I really enjoyed the talk on a lot of different levels. One was the technical aspect of creating a children's toy on the cheap that is inclusive and universally programmable through audio encoding. Quite worth a watch.
I don't think that projects like this alone will change the culture around women in tech. But it's a start. What we realistically need is a change in culture throughout all layers of society. I think the problems around women in tech are quite complicated. And unfortunately they usually result in a bunch of assholes starting to shout either about how feminism is evil or how diversity isn't important. And biases aren't actually thaaaaat bad, right? 😝
I could rant here forever and it's questionable how many people would actually care 😅 I can recommend this talk. Let's leave it at that :)
## [The Moon and European Space Exploration](https://fahrplan.events.ccc.de/congress/2016/Fahrplan/events/8406.html), [Interplanetary Colonization](https://fahrplan.events.ccc.de/congress/2016/Fahrplan/events/7942.html)
Those were just the first two talks from a series of space talks. The first one was from one of the heads of ESA about their plans to colonise the moon for profit! And science, of course... It was quite funny and definitely worth watching.
The second one I almost liked more, though mostly the first part of it. Liz George manages to explain incredibly well in a very short amount of time what challenges exist when discovering exo-planets. The second part (by somebody else) is a bit more vague about how to actually get there and is less science, more fiction. But hey 😝
# Going into 2017
So in short: 33C3 was pretty epic! And I honestly can't wait for next year. It's not clear yet where it will be held, but it will be epic nonetheless. And who knows, maybe I'll have a talk to give by then 😊
Which brings me to this year. Last year was fucking shitty. Politically... On a personal level it actually went quite well, and I got a lot of shit done. I did Google Summer of Code, I made *huge* progress on my game project (yes, I will post about that at some point). And especially in the last months of the year, I redesigned and rerouted the Open Plantb0t board. On January 1st, 2017 the revision A2 design went into production.
I hope to get all my parts together soon and build up a second prototype series which (hopefully) works better than the last 😉 I will keep y'all updated on that.
Until then, I hope you've had a happy new years eve and not an all too terrible year...yet 😉

@ -0,0 +1,114 @@
Title: LibGDX interface containers
Category: Blog
Tags: /dev/diary, libgdx, game dev, java
Date: 2017-01-24 00:14
**Let me tell you a factual statement**: UI programming is terrible
**Let me tell you an even more factual statement**: UI programming in LibGDX is even more terrible
I am a big fan of LibGDX. It's a really nifty library/framework for getting started with game development if you're more comfortable inside a code editor than in a full-blown game engine targeted more towards designers and artists. And I put my money where my mouth is: I have a series about LibGDX development for beginners on this blog and work almost exclusively with it when it comes to my own projects.
Yet, there is something that bothers me, and there didn't seem to be a great solution for it: UI code structure. In this post I want to highlight a utility I have written for LibGDX, very easily embeddable into your existing projects, which will help you structure UI code more efficiently.
# The root problem
The reason I dislike UI programming with LibGDX is that it usually results in very long code files or passing dozens of parameters into sub-classes that are needed to update the UI for button presses, etc.
This goes so far that I have written an editor for game assets before just to realise that (once the development was complete) it had become completely unmaintainable and I had to start from scratch with better structure. It is incredibly easy to just throw out a UI design with Scene2D and LibGDX but unfortunately it is equally easy to produce very bad code which will turn into a big spaghetti mess.
Let's look at an example problem that I wanted to solve.
![LibGDX UI design problem](/images/libgdx_ui/01_base_problem.png)
Looking at this structure we have three main components that interact with each other. We have a class that handles UI logic (setting up actors in tables, adding listeners, etc.). We have a window state, which in the particular case that made me write an alternative was a "Lobby Handle" that coordinated which players were going to enter a match, the map, the game mode and whether everybody in the multiplayer match was set to "Ready". Lastly, we have the actual network signal handlers that listen to TCP/UDP packets and execute code to read from and write to the window state as well as update UI elements.
Implementing this structure with Scene2D and LibGDX will result in a lot of very ugly code, because the network signals need to know everything about the UI (how it is structured, etc.), and our window state can be written to by two different sources, which means that we need to mutex it to avoid race conditions.
# Maybe a solution
So, what was I trying to solve? First, a bit of scope limitation, because a lot of UI problems have been solved over and over again, usually at the cost of runtime performance or with a *lot* of extra code. What I wanted:
1. UI code doesn't have to be embedded in a screen
2. All UI code can access the shared context of the screen
3. UI elements can update each other
4. A clean API that can be called from anywhere (with a reference to the handle) and triggers a range of functions.
So with that in mind, this is what I did.
```java
class MyUIHandle extends UIHandle {

    public static enum UI implements UI_BASE {
        PLAYER_LIST;
    }

    { /** Initialiser block for new objects */
        registerHandle(new PlayerList(), UI.PLAYER_LIST);
        // ... more handles
    }

    @Override
    public void initialise(Stage s, Object... var) { ... }

    public class PlayerList extends UIContainer {
        @Override
        public void initialise(Stage s) { ... }

        // Define more API here ...
    }
}
```
When we initialise a new `UIHandle`, the initialiser block will create our `PlayerList` and register it with the `UIHandle`; that code is hidden away from you. You can see that we're implementing a separate enum type that we overload with values, so that we can address submodules via a compile-time-checkable value. From inside (and outside) this class, `UIContainer`s are available via `handle.get(UI.SUB_HANDLE)`. Obviously keeping your enum labels short will make your function calls snappier :)
The following graphic will sort-of explain the layout in more detail.
![Super UI fixing attempt](/images/libgdx_ui/02_ui_structure.png)
What you might also notice is that the `UIHandle` has an initialise function with variadic parameters, while the `UIContainer` class only takes a stage. That is because window context is stored once in the `UIHandle` and is then accessible from all `UIContainer` classes. This way we only need to apply the inversion-of-control pattern once instead of for every sub-component.
You can keep the `UIContainer` classes outside this code-file. Then you might however want to provide a constructor that does another inversion of control, so that an external `UIContainer` can access the context provided via `initialise`:
```java
public class PlayerList extends UIContainer {
    private MyUIHandle parent;

    public PlayerList(MyUIHandle parent) { this.parent = parent; }

    // ...
}
```
Now let's talk about that public API. In our original example we wanted networking code to update some UI elements, and we want UI elements to update other UI elements. So first of all, we keep context in each `UIContainer` about which UI elements are accessible to it. What we can do in any of our submodules is this:
```java
parent.get(UI.PLAYER_LIST).updatePlayers(playerList);
```
It also means that if we get new data from – say – a network socket or AI simulation, we can very easily update data in some random UI element.
```java
handle.get(UI.PLAYER_LIST).populate(playerList);
```
So all in all, we have solved the following problems:
1. We have access to all game state in the UI code without passing too many parameters into lots of sub-classes
2. UI code can be moved into lots of files for easier understandability
3. Context isn't duplicated
4. UI code can update other UI code without needing a direct reference to it.
The individual `UIContainer` instances are essentially independent of each other via dependency injection.
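As an aside, the registry idea behind this can be sketched without any LibGDX types at all. The following stand-alone example is a minimal sketch of my own; apart from the `UIHandle`/`UIContainer` names from above, everything in it (class names, methods, the player data) is illustrative and not part of any real library:

```java
import java.util.EnumMap;
import java.util.Map;

// Stand-alone sketch of the handle/container registry described above.
// LibGDX types (Stage etc.) are left out; names are illustrative only.
public class Main {
    enum UI { PLAYER_LIST }

    static abstract class UIContainer { }

    static class UIHandle {
        private final Map<UI, UIContainer> containers = new EnumMap<>(UI.class);

        void registerHandle(UIContainer c, UI key) { containers.put(key, c); }

        @SuppressWarnings("unchecked")
        <T extends UIContainer> T get(UI key) { return (T) containers.get(key); }
    }

    static class PlayerList extends UIContainer {
        private String players = "";
        void populate(String p) { players = p; }
        String describe() { return "players: " + players; }
    }

    public static void main(String[] args) {
        UIHandle handle = new UIHandle();
        handle.registerHandle(new PlayerList(), UI.PLAYER_LIST);

        // e.g. a network signal handler can update the UI
        // without holding a direct reference to the widget:
        PlayerList list = handle.get(UI.PLAYER_LIST);
        list.populate("alice, bob");
        System.out.println(list.describe()); // prints "players: alice, bob"
    }
}
```

The enum-keyed `EnumMap` is what makes `handle.get(UI.PLAYER_LIST)` compile-time checkable while keeping the containers decoupled.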
This library isn't done yet. Most of this is kinda hacked together to fit into **my** game. But I'm interested in making it more generic and putting it on GitHub, especially because I can see myself using it again in the future.
Hope this might be useful to somebody out there. If you have questions, comments, hatemail...
[Twitter](https://twitter.com/spacekookie) or [E-Mail](mailto:kookie@spacekookie.de)

@ -0,0 +1,60 @@
Title: Dabbling with Moonscript
Category: Blog
Tags: /dev/diary, moonscript, programming
Date: 2017-05-06 11:55
![Lua means moon in portuguese](/images/lua_moon_banner.png)
Recently I've started learning/using Moonscript. It's a language that compiles to [lua](https://www.lua.org/) and as such can run on LuaJIT, an alternative lua engine which allows very easy and *fast* FFI calls into native code. This makes lua capable of powering very performant applications and games that use native rendering, window creation or general libraries.
But in my opinion lua has always felt a bit cumbersome. I use awesomewm, so I've had to write it occasionally to customise my UI layout. And this is where Moonscript comes in. It's a lot of syntactic sugar on top of lua, as well as some other concepts such as object orientation, which lua just plain doesn't have. And while yes, you can write good code without OO (*cough* **C** *cough*), it is a nice tool to have in your pocket, especially when writing GUI applications or games.
## The language
```Moonscript
class Thing
  name: "unknown"

class Person extends Thing
  say_name: => print "Hello, I am #{@name}!"

with Person!
  .name = "MoonScript"
  \say_name!
```
As you can see, Moonscript is an indentation-based language which (in my opinion) combines syntactic elements from lua and ruby. In the snippet above (which is from the [moonscript website](http://moonscript.org/)) you can see classes, inheritance, as well as the `with` keyword, which allows you to initialise and work with an object without typing its variable name over and over again.
If you want to learn more about the language, I can only recommend you have a look at the [Moonscript in 15 minutes guide](https://github.com/leafo/moonscript/wiki/Learn-MoonScript-in-15-Minutes).
## How to use it
You can just write Moonscript files, add `#!/usr/bin/env moon` to them and get going. Obviously that's pretty cool for little scripts that you just want to run. But it's not so great for larger applications, because a) you don't have access to `ffi` via LuaJIT and b) it adds additional startup cost.
So instead, for my projects so far (a [game](https://github.com/spacekookie/dinodino) and a desktop app), I use a `Makefile` to run the Moonscript compiler and then execute the resulting `init.lua` with LuaJIT.
```Makefile
SOURCES := $(wildcard *.moon) $(wildcard **/*.moon)
LUAOUT  := $(SOURCES:.moon=.lua)

.PHONY: all run build

all: run

build: $(LUAOUT)

%.lua: %.moon
	moonc $<

run: build
	luajit init.lua
```
## Wrapping up
So...I'm kinda excited about this. Most of the code I write is either in C or Java (depending on what exactly I'm doing). And those two strongly typed and compiled languages have served me well and will continue to be my go-to solutions for a lot of problems.
But I've long been looking for a dynamically typed, interpreted or just-in-time compiled language that I can use for anything from little scripts to medium-sized desktop applications. I used to use python for this but have recently (over the last 6-9 months) fallen out of love and developed a rather passionate dislike of it and its ecosystem.
My current project will get its own little article at some point but I don't mind teasing the progress here. I'm writing a new UI for redshift which works with X11 linux backends and is heavily inspired by f.lux on MacOS. It's written in Moonscript, with my own forked version of redshift (which I call [libredshift](https://github.com/spacekookie/libredshift)). It's on [github](https://github.com/spacekookie/redshift_ctrl) and licensed under MIT.
Hope I've made you a little curious about Moonscript :)

@ -0,0 +1,14 @@
Title: Rebuilding my Website (again)
Category: Blog
Tags: /dev/diary, meta
Date: 2018-01-03 01:31
It's winter, rebuilding my website is a tradition...right? **Happy new year everybody.**
This has been a long time coming. I've not really been happy with the way my website looked for a while and have been playing around with new designs for the past few months. I also took that opportunity to throw out a few old articles, fix formatting on others and generally do house-keeping.
The whole thing is still using [Pelican](https://blog.getpelican.com/) to generate pages but now with a completely new theme and new plugins 🎉
This re-design also decreases complexity. The old theme was massively over-complicated and I've now taken it down to 3 (or 4?) templates. Working around the old theme and what Pelican expected was an interesting experience, especially since it felt less like building a website and more like working with a game engine, where small changes make lots of magic happen and *voilà*.
There are a few things I want to write about...*soon*. Until then, there is an easter egg hidden somewhere. Let's see who finds it first 😉

@ -0,0 +1,57 @@
Title: Failure. Or: why Rust is probably the best programming language ever created
Category: Blog
Tags: /dev/diary, reflections, programming, rust
Date: 2018-03-11
*This post is two stories.* One is about accepting and recognising personal failure, reflecting and growing from it; the other is about an incredibly and seemingly endlessly powerful programming language, called *Rust*.
**In the summer of 2014** I started a project which was kind of insane. I knew it was insane, yet I embarked on that journey regardless. I wanted to write a password manager. I chose Ruby as a language because I didn't know many others and was – in more than one way – still a novice programmer.
The details of development aren't too important. About 6-8 months into the project I had written something rather cool and functional. It wasn't very fast, the code base was a bit of a mess and I was having issues with packaging. But, at the core, I really liked what I had made, which had shifted from just being a password manager to being a universal, platform-independent secrets manager, close to a keychain. In my mind, applications could write sensitive information into a "vault" which was managed by this project, without having to worry too much about access rights, authentication or anything else.
# So far so good; this is how both stories start.
Over the next few years this project would take me over and, ultimately, destroy me. I had gotten it into my mind that the cryptography should have been handled by something more low-level, something more "advanced". I talked to people, I looked at languages and in the end, thinking I had more experience now, chose C++ to re-write the project in. *This was the beginning of the end.* It took me another six months to get the basics done, getting caught up on nitty gritty details.
I ended up switching to C, back to C++, *then back to C again*, not being satisfied with the way that one or the other language handled things. And the scope was out of control. I didn't want to make a cute little secrets manager anymore. I wanted to make a database. It had transactions, sharding, multi-user access, backups, countless optimisations, run modes and even its own SQL-like query language. I went completely overboard and lost all grasp of what it was I wanted to create. After literally years of re-writing the same parts of the code again and again, creating new libraries to handle ever smaller tasks that had been completely trivial in Ruby, **I stopped.**
What this project had turned into wasn't maintainable. It didn't even really make any sense. It had no use-case, besides "being cool" and that wasn't really enough to motivate me anymore. I was also caught up with other work, getting involved with the Google Summer of Code 2016, then slowly fading work on the project into the background. This wasn't a conscious decision though. In my mind, I was just putting it on hold, learning from all of my failures and then trying myself again at another re-write. I didn't know that *not* trying again would be the act of learning from it.
<br/>
As hackers, we are often compelled to take on the world. Everything seems plausible, sometimes trivial. We understand technology in a way that most people don't and in that, we gain confidence in our abilities past the point of reality. Hubris. We want to make things, break things, modify things. And we forget our own limitations, time and scope. We end up starting so many things that we never finish. Or we get obsessed with something that doesn't make any sense.
It took me over 2 years to understand that I can't let my impulse to adventure drive the way I work. I love open source and I love working on things that are just *free* and out in the open. I want to help build an ecosystem of tools and applications that help people, without any cost or the baggage of being made for a closed-down system. But learning that there were things I can't do, that maybe the way I viewed work, problems and how to tackle them was *fallible*, took some more time to understand. In the end, everything I did on this project was a colossal waste of time. It's still on my GitHub, more as a reminder to myself of how failure works...
It has nothing to do with not knowing how to solve a problem. It has nothing to do with failing to understand code or a language or a toolkit... It has *everything* to do with not knowing how to limit a project, **and when to stop...**
# This is the end of story one
It's been nearly a year since I last worked on this project (or the 5th iteration of a re-write, anyway). In the meantime I've worked on many small things, trying to keep in mind what I want to do, what is plausible and also useful. And I've come into contact with a magical programming language: *Rust*!
I had started programming in Rust before, during a very stressful time in 2016. And I never managed to get into it much. This year was different though. The toolchain had matured and maybe I had also matured as a developer. And maybe I was in a better state of mind to learn new concepts. Whatever it was, I'm glad it happened.
Rust is a systems programming language by Mozilla. It's a compiled and safe language which prevents segfaults and allows for *fearless concurrency* (as they put it), without sacrificing speed. In fact, it runs [almost as fast as regular old C code](http://benchmarksgame.alioth.debian.org/u64q/rust.html).
Now though, that's only half the reason why Rust is amazing, and I will show in a moment how the first part of this story is in any way relevant.
After creating a few smaller projects in Rust, I started thinking again about password managers. The landscape looks a little different now than it did in 2014, yet I'm still not 100% satisfied. Everything ends up being an add-on to keepass which, in my opinion, doesn't have a very good database layout or file format. And end-user applications are usually very complicated or badly designed. I know I have high standards when it comes to UI/UX design, so please don't take this as slander against these projects. I just didn't want to use them and had been sticking to my Windows application running in WINE for the last few years.
Now I knew Rust. The amazing thing about Rust is only partially the language; the other half is the entire toolkit that comes with it: from a built-in version manager to a kickass package manager and an ecosystem of `crates` that can be easily included, following the UNIX philosophy of enabling you to do small things in good ways, yet somehow always fitting together.
I remembered what I had thought about before: limiting scope, accepting the limits of my time and of my own abilities. And I started writing a project very close to what it had once been in Ruby. Within a week or so, I was close to the feature set of the version I had finished before beginning my descent into madness. It took me a week of collecting external crates, writing a few hundred lines of code myself and playing around with different crypto backends, and there I was, in the process of building something cool again.
Rust makes it incredibly simple to do rapid prototyping. Yes, the language is very strict sometimes. Yet it has this feeling of "throwing shit against the wall" and seeing what sticks. With the added benefit that there are compile-time checks that make sure that there are no serious issues with your program. You can still write bad code, it just seriously limits the damage you can do. And that makes it incredibly fun to write with.
# What's the point of all this?
Well, first that I love Rust 😝.
But secondly that sometimes failure looks different than what we might expect. It's not about failing on a technical but either on a social or planning level. And maybe that this is something we should talk about and foster in the hacker community.
Rust is an amazing language for many things but it also has its limits. There are countless people in hacker culture who stick to their technologies because they feel familiar, dragging others into their little bubbles because they want to expand their influence, never considering whether what they're advocating is sensible or scalable.
This isn't just something we (as a culture) do with tools; it also happens on a social level. And in the end, shouldn't we all strive to learn new things, broaden our horizons and, last but not least, choose the right tool for the right job?
Sometimes a new technology can enable us to break out of our bubble and achieve something that we previously thought impossible.

@ -0,0 +1,115 @@
Title: `home-manager`. Or: how not to yak shave
Category: Blog
Tags: /dev/diary, reflections, programming, nix
Date: 2019-03-21
Don't expect the bait-and-switch titles to remain forever.
I just thought it was fitting for this one too 😉.
## Some background
Ever since I started venturing into computer programming and
using more and more tools that used `dotfiles`, I've been
frustrated at the lack of good tools when it came to synchronising
these files.
And that's not for lack of options.
Either I disagreed fundamentally with what a given tool wanted to do,
or nobody had come at the problem from the same angle as me before.
This is not to bash on other projects or solutions.
I know many people who are very happy with
either keeping their dotfiles in a large git repo, symlinking manually,
making `~` a git repo or using various tools that automate the symlink process.
The problem I had was that not all my computers were the same.
As in, I didn't necessarily want the same configs on all of them.
And this is where it gets complicated.
I had been thinking about writing a tool to do the things I wanted to do before.
Thinking back to my post about failure and limiting scope, I never started it.
While making a lot of drawing board attempts, I never wrote a single line of code
because I could tell that it would lead me down a dark path.
Reading about people on reddit from time to time who started their own
"ansible but for dotfiles" projects that never went anywhere,
I felt like I was doing the right thing by not even starting.
So nothing happened for a while. Until recently.
## Enter: nix
In case you don't know it, `nix` is a functional programming language
and a pure package manager for unix systems.
Yes, that includes MacOS.
There is a linux distribution built around it called [NixOS] which utilises
`nix` as a package manager and configuration language heavily.
[NixOS]: https://nixos.org/nixos
So what does pure packaging really even mean?
Have you ever had the situation where you upgraded your system,
and halfway through something failed,
and you ended up with a broken system
because some of the packages had already changed while others had not?
Yeah, that's what people would call "impure packaging".
With `nix` on the other hand this cannot happen,
since packages are atomic, meaning that after something is built,
it can't be changed again. Doing an update? New package.
Changing a small config? New package.
It means that not only can failing upgrades be seamlessly rolled back
but also that two different versions of the same library
can easily be installed at the same time.
Yes, that's right: no more "DLL hell"!
Well okay so why am I fangirling about `nix` here?
Apart from the fact that I've been dabbling with NixOS quite a bit recently,
to the point where I have basically replaced all my [Arch] installs with NixOS now...
[Arch]: https://archlinux.org
## Enter: home-manager
You already saw it in the title of this post, but I wanted to re-introduce it.
What exactly is `home-manager`?
It's `nix`, but for your userspace!
Not only does it not require root permissions,
meaning you can install packages just for you locally
(well okay, `nix` can do this as well but...besides the point).
More importantly, `home-manager` adds modules and utilities to manage userspace configurations.
Everything is sourced from `~/.config/nixpkgs/` (you can move that IIRC)
which is then used to generate all your configuration files.
Config files are kept in the nix store (which is usually located at `/nix`)
and then symlinked to their destination.
Right. Now I can practically hear you all saying:
"but didn't you say you didn't like tools that just symlinked stuff?"
Well... yes, and no. Obviously symlinking a config is much nicer than having to copy it around.
What I disliked about tools that symlink configs to places from some other place
was that I was still responsible for managing that "other place".
With `nix`, I never touch the store directly. In fact you don't ever do that!
Instead I edit the `home.nix` configuration (or sub-configs when it gets too complicated),
which then takes some inputs, defines the outputs, and `nix` makes sure that the configs
are then where they need to be.
It gives me a single source of truth, but the best thing is
that it's not a dumb source of truth: nix is a programming language!
What that means is that I can dynamically adjust some config contents
according to what system I'm running on, while not having to worry about
keeping it all sane.
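For illustration, a minimal `home.nix` could look like this.
`home.packages` and `programs.git` are real `home-manager` options,
but the concrete package choices and identity values here are
assumptions for the sake of the example:

```nix
{ config, pkgs, ... }:

{
  # packages installed just for this user (illustrative choices)
  home.packages = [ pkgs.ripgrep pkgs.htop ];

  # home-manager generates the git config in the store
  # and symlinks it into place
  programs.git = {
    enable = true;
    userName = "spacekookie";
    userEmail = "kookie@spacekookie.de";
  };
}
```

Because this is `nix` code, the same file can branch on hostname
or platform to produce per-machine configs from one source of truth.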
I'm really happy I didn't write my own "ansible but for dotfiles"
and I think I'd recommend nobody do that
(I mean...unless that's your kink - I don't judge!).
But I'm even happier to have been introduced to `nix`
and `home-manager` in particular.
I'd much rather help write some re-usable modules
that other people can also take advantage of
than reinvent the wheel from scratch. Again.

@ -0,0 +1,152 @@
Title: Rust 2019: how we make decisions
Category: Blog
Tags: /dev/diary, rust
Date: 2019-01-21
I'm late to the party, I know.
But writing this post took a little longer.
In fact, I wrote it three times, not really sure where I wanted it to go and what I wanted to say.
In the end, I think most things have already been said.
So this will be short.
## Problems
There have been a great number of blog posts about the problems that the Rust community is facing.
I recommend you read some of them because they're really good.
For example [boats], who published an article in early December about "Organisational Debt".
Or [killercup] about how we count contributions (and other things).
Or [skade], who published a small collection of articles on organisational subjects.
[boats]: https://boats.gitlab.io/blog/post/rust-2019/
[killercup]: https://deterministic.space/rust-2019.html
[skade]: https://yakshav.es/rust-2019/
There's many more under the [#rust2019] hashtag on twitter.
[#rust2019]: https://twitter.com/search?q=%23rust2019&src=typed_query
My point is: you can read about what the issues are elsewhere, from more perspectives.
There's no point in me trying to rehash the same stuff again.
I'm not so good a writer that I'd bring anything to the table that these people haven't already.
In my opinion the issues we have at the moment are because of two things.
- Teams that have too much responsibility
- Bad tools
(I won't really talk about the tools issue in this post.
In summary: the tools we use to communicate are all garbage,
and GitHub and Discourse comments are terrible for debates.
Really, any linear chat platform is terrible.
There might be another article on this at some point.)
As I've said, there are plenty of articles that go
into _what_ the problem with our current process is.
I want to focus on the _why_, the _how_ and the _how we stop it_.
And first we need to talk about how things are built.
## Vision vs Implementation
Fundamentally there are two parts to design:
**Vision** is what drives a project forward,
creating new concepts and thinking about their implications,
while **Implementation** is finding the most elegant,
sustainable and performant way of making vision a reality!
Now, these two can often not be separated as easily as I make out here.
Take the example of the CLI-WG.
When we first assembled last year, we started working on a vision:
"how should CLI development look?"
What followed then was an implementation of the vision,
as closely as we could manage with the resources and time available.
I would argue that during the implementation period of the vision,
some aspects of *what* we thought should be done
were influenced by things we learned about *how* to do them.
Like hitting a moving target.
To some extent this is how most software development works,
unless you are working towards a *very* well defined spec!
Having the same group of people be in charge of both the overall
vision for a system and its implementation isn't a bad thing,
*given that the system is small enough!*
And this is where teams come in!
Splitting up the community into smaller groups who work on the same stuff,
so that this kind of collaboration becomes possible again.
In a way, the Rust 2018 working groups were inspired by the same idea.
The difference between teams and working groups is
that the latter have a more loosely defined governance structure and allow people
to join and collaborate more easily and quickly.
There's less "established structure" in a working group.
These lower barriers are one of the reasons why I'm a huge fan of working groups,
and feel like the Rust project should expand on them in the future.
Maybe some teams could even be replaced.
But that's not fundamentally the issue that the project is facing.
## Blurring the lines
Problems arise when these lines are blurred too much.
This happens in discussions, as well as with teams of people
who get involved in too many things.
Ultimately, we need to face the fact that days are short,
people's time is limited and the number of responsibilities a single person can have
isn't infinite!
Throughout previous blog posts and conversations with others,
the need, or "desire", for better communication channels has been made clear.
And I feel that we need to work on the way that we communicate,
if we are ever to fix the way that we make decisions.
But before I go into detail about what that means to _me_, specifically,
I want to talk quickly about the core team.
The core team is a medium sized group of Rust developers,
who oversee large areas of the development of Rust.
As the website puts it:
> Overall direction and policies of the project,
> subteam leadership, cross-cutting concerns.
As such, I feel it would be the perfect candidate to step back from implementation,
and focus on both vision for the entire language,
as well as communication among smaller teams.
Unfortunately this doesn't seem to have been the case in the past,
and it is something the core team should work on this year.
Note that I think this is a responsibility that should also not be taken lightly.
Most of the members of the core team are also active in other teams,
sometimes even leading them.
I feel that this is one of the reasons why this role has been neglected.
## Solutions
So I would suggest something that I've heard others talk about before,
although rarely publicly, and which might be considered a bit of a hot take:
The core team should be rotated.
What that means is that the core team in itself still exists,
does a certain number of jobs and should be considered quite a time-intensive commitment.
But the people involved in it shouldn't stay involved with it forever.
Even if it's currently (practically) already the case that the team rotates,
making this more explicit, and making the roles of the core team a central part of
how other teams communicate and operate, would, I feel, significantly benefit
the overall contribution climate of the Rust project.
Ultimately, we have a problem in how we communicate.
The lines between *vision* and *implementation* get blurred too often.
Not only in RFC discussions.
Implementation-specific bikesheds on GitHub
often result in new rationales being pushed forward
that have nothing to do with the actual question at hand.
And as such, people often talk past each other.
I don't know how wide the communication scope of the core team should be,
but I definitely think that moving its responsibility away from implementation
and back towards communication, and fostering collaboration between teams and working groups,
is the approach we will have to take this year to solve our problem of
organisational debt!

@ -0,0 +1,107 @@
Title: Hacking is political
Category: Blog
Tags: /dev/diary, CCC
Date: 2019-01-02
I'm just coming back from the Chaos Communication Congress,
a four day event just after Christmas.
It was my fourth one in total, and now the third in a row
(the first being 25C3 as a smol girl).
It's hard to describe the C3 (abbreviation for the congress, as opposed to the CCC, the club).
Some call it a "hacker conference" which is...in some ways accurate,
but often doesn't manage to capture what it is.
Not to mention that it relies on the external definition of a "hacker" to describe it.
Others call it a "tech event" or "tech conference", which really isn't accurate either.
There are lots of artists and non-tech people represented
and I feel these experiences shouldn't be ignored.
The C3 had been in Hamburg for quite a while but was forced to move last year
due to the congress centre there being remodelled (read: "torn down").
Even after the remodelling the venue would no longer be suitable for the event,
meaning that a semi-permanent move had to be initiated.
**I didn't enjoy last year's Congress.**
Not only was it plagued by loads of logistical problems and bad adaptation of the new space,
there were political issues around the organisation of the event and how decisions were made.
Last year's congress showed that the CCC (the club) had a problem with apoliticality.
## Apolitical Hackers
I've been rather loud about the apolitical and centrist parts of the hacker movement.
Conflation of the terms "maker" and "hacker" has further undermined the movement
with capitalist and neoliberal ideas.
That isn't to say that everybody in the hacker scene
needs to be continuously aware of all political implications of their actions at all times.
Danger arises from people who don't feel like political action is important _at all_
or who represent centrist, capitalist and neo-liberal viewpoints.
This includes refusal to take action against climate change
or supporting the police, regulatory bodies and disregarding free software movements
for reasons of convenience.
I'm talking about these things in rather vague terms because I want to avoid
drawing a definite line that people can't cross.
Really, I would argue that there's many ways to be a hacker.
I see an issue, and a danger to the movement,
when people attempt to "leave politics out of hacking" entirely.
This includes fighting capitalism and discrimination against minorities.
Hackers, by definition, are the political left!
Anyone who isn't and still claims to be a hacker has successfully co-opted the word
and is attempting to undermine the movement for their own political gain.
It's not always obvious how the existence of apolitical hackers impacts the movement
or the technologies that they build.
But just like Tech in general has racism and sexism problems,
so does the hacker movement.
Society does, really. There's no way to remove a community from time and space.
"Stuff" happened before we got here, and pretending that it didn't is dangerous.
I could talk about the impact of apoliticality on technology
and communities that are being built for a very long time but I really don't want to today.
Instead I want to focus on something else, something more positive.
## A very political congress
I very much enjoyed this year's congress!
Maybe in part that was because of the people I was attending it with this year ( 😉 )
but in no small part, it was also because of the general atmosphere around the event.
This isn't something only I noticed.
I had conversations about this with others,
who apparently also noticed this change from last year.
The first thing you saw when entering the venue was a huge Antifa flag,
setting the tone of the rest of the event.
Apart from that there were a lot more assemblies this year,
many were dedicated to squatting, anarchy and fighting capitalism.
There were a few queer assemblies scattered around the hall,
making it so that queer and leftist representation wasn't all bundled in one spot
but was present all through the venue.
Even purely technical assemblies were surrounded by antifa flags and anarchist stickers and flyers.
This changed the atmosphere quite significantly.
It wasn't perfect.
Just like last year, it was plagued by logistical problems,
some smaller, some larger.
There were people who had their stuff stolen.
There were speakers who made problematic jokes on stage.
There were still apolitical and centrist people present.
In fact, there were people complaining about C3 "having gotten so political",
which really is a good thing.
We want the centrist and right-leaning "hackers" (read: libertarians) to feel uncomfortable.
But despite all that, the air, the general atmosphere of the event was different.
I welcome this change.
And I hope that it sets a new tone for the CCC and the C3 in general.
I would very much enjoy going back next year
and finding out that the event had become even more overtly anarchist.
Because we shouldn't forget our core motivation as hackers:
**being gay, and doing crimes!**
The only downside from 35C3? I don't really know what to think about birds anymore...

@ -0,0 +1,168 @@
Title: Allocations are good, actually
Category: Blog
Tags: Rust, programming
Date: 2019-04-07
Something that you can often hear in the Rust community,
especially from people who were previously C or C++ developers,
is the adage "allocations are slow".
The other day a friend asked me how to create a list of consecutive numbers.
I pointed her at `(0..).take(x).collect()`,
which can be collected into a `Vec<_>` of whatever length she chooses.
It did make me think, however, about how this could be done
more nicely in an allocation-free manner.
It led me to come up with the following code, which creates a `[_; _]` array,
depending on which integer representation and length you choose.
```rust
// note: the length must be a compile-time constant for an array
const X: usize = 32768;
(0..)
    .zip(0..X)
    .fold([0u32; X], |mut acc, (step, i)| {
        acc[i] = step;
        acc
    })
```
So with this in mind, I wanted to run some comparisons.
I chose the numbers so that 32768 consecutive numbers would be generated.
I compiled the example with both `Debug` and `Release` mode.
(All of these measurements are done with `rustc 1.33.0 (2aa4c46cf 2019-02-28)`)
Let's start with the non-allocating version.
```console
$ time target/debug/playground
target/debug/playground 1.45s user 0.00s system 99% cpu 1.457 total
$ time target/release/playground
target/release/playground 0.27s user 0.00s system 99% cpu 0.270 total
```
Cool! As you can see, the `Release` build is over five times faster,
which is about the kind of performance difference you would expect.
Let's see how an allocating implementation stacks up to it.
The code used here is the following.
```rust
let vec: Vec<u32> = (0..)
.take(1024 * 32)
.collect();
```
So how fast is this gonna be?
```console
$ time target/debug/playground
target/debug/playground 0.01s user 0.00s system 93% cpu 0.010 total
$ time target/release/playground
target/release/playground 0.00s user 0.00s system 85% cpu 0.005 total
```
What? ...it's faster?!
Well, I guess this goes to show that it's not as simple as saying "allocations are bad".
Avoiding allocations at all cost can slow you down.
Thanks for coming to my TED talk!
*. . .*
## Yes but *why*?
Okay maybe you're more curious than that and want to understand what's going on here.
So come along, let's read some assembly!
Let's focus mostly on the release profile here,
because `Debug` generates a lot of code that makes it harder to understand.
So we have two code snippets that we should throw into [godbolt] to see what rustc does.
[godbolt]: https://rust.godbolt.org/
```rust
// This doesn't allocate
const length: usize = 1024 * 32;
pub fn slice() -> [u32; length] {
(0..)
.zip(0..length)
.fold([0; length], |mut acc, (num, idx)| {
acc[idx] = num;
acc
})
}
// This does
pub fn vec() -> Vec<u32> {
(0..).take(1024 * 32).collect()
}
```
Let's have a look at the assembly that the `vec()` function generates.
<skip>
```gas
.LCPI0_0:
.long 0
.long 1
.long 2
.long 3
# ... snip ...
example::vec:
push rbx
mov rbx, rdi
mov edi, 131072
mov esi, 4
call qword ptr [rip + __rust_alloc@GOTPCREL]
test rax, rax
je .LBB0_4
movdqa xmm0, xmmword ptr [rip + .LCPI0_0]
mov ecx, 28
movdqa xmm8, xmmword ptr [rip + .LCPI0_1]
movdqa xmm9, xmmword ptr [rip + .LCPI0_2]
movdqa xmm10, xmmword ptr [rip + .LCPI0_3]
movdqa xmm4, xmmword ptr [rip + .LCPI0_4]
movdqa xmm5, xmmword ptr [rip + .LCPI0_5]
movdqa xmm6, xmmword ptr [rip + .LCPI0_6]
movdqa xmm7, xmmword ptr [rip + .LCPI0_7]
movdqa xmm1, xmmword ptr [rip + .LCPI0_8]
.LBB0_2:
movdqa xmm2, xmm0
paddd xmm2, xmm8
# ... snip ...
ret
.LBB0_4:
mov edi, 131072
mov esi, 4
call qword ptr [rip + _ZN5alloc5alloc18...@GOTPCREL]
ud2
```
(full code dump [here](https://pastebin.com/zDXi7qtt))
</skip>
As you can see this uses the "Move Aligned Packed Integer Values" instructions in x86_64.
From some `x86` docs:
> Moves 128, 256 or 512 bits of packed doubleword/quadword integer values from the source operand (the second operand) to the destination operand (the first operand).
Basically, LLVM can figure out that our numbers are predictable
and can write them out in batches.
We can already see why the non-alloc code is going to be slower:
because the code that assigns numbers
(i.e. assigning values to an array sequentially) is less understandable to a compiler, it will not end up being batched.
That's not to say that alloc code is going to be this fast on every platform
(many RISC instruction sets lack such vector extensions)
and this doesn't even take embedded targets into account.
But there you have it.
LLVM is magic...
... and saying "allocations are bad" really isn't telling the whole story.

@ -0,0 +1,173 @@
Title: Bikeshedding disk partitioning
Category: Blog
Tags: linux, zfs, nixos
Date: 2019-07-11
I recently got a new Thinkpad. Well...new is a stretch.
It's an X230, featuring an i5 and 16GB of RAM.
One of the first things I did with this laptop was to [flash coreboot on it][coreboot].
This is something I've always wanted to be able to do,
but so far lacked hardware that was supported.
And generally, it felt like finally maybe I could have a laptop to tinker around with.
[coreboot]: https://octodon.social/@spacekookie/102150706024564666
And that's where this post begins...
## Encrypted disk
So from the start I knew I wanted to have a fully encrypted disk.
What that means is that your `/boot` partition
(whether it is its own partition or not) is also encrypted.
Secondly, I don't like (U)EFI...
What that means is that I'm installing GRUB
in the MBR (with a DOS partition table) instead.
Now: GRUB stage 1 can handle the encryption for us,
but there are some limitations:
- Keyboard layout limited to `US`
- `/boot` can only be of certain partition types
- `/` and `/boot` need to be contained in an LVM
That last one _might_ not be accurate if you only want
to have an `ext4` (or similar) rootfs. But because I
want to have a `zfs` root, I need to embed it into an LVM.
This is also the reason why `/boot` needs to be its own partition.
After we've done all this, we will install a linux distribution of choice
(which we'll reveal later).
Anyway, let's get started!
## Preparing the disk
(Feel free to skip this step)
Something you might want to do is make your disk look
otherwise uninitialised, or "securely erase" any data
that is already on it.
But generating random data is a lot of work, and `/dev/urandom`
is very slow.
Instead you can create a crypto-disk (LUKS) on it, then fill it
with zeros. Because of the encryption, the data on disk will still seem random.
(`/dev/sda` is my disk in this example because lolwat is nvme even?)
```console
$ cryptsetup luksFormat /dev/sda1
$ cryptsetup luksOpen /dev/sda1 sda_crypto
$ dd if=/dev/zero of=/dev/mapper/sda_crypto bs=512 status=progress
```
This might take a while, but considerably less time than filling
the disk with random data. After this is done, you might want to
actually wipe the first bunch of bytes.
```console
$ cryptsetup luksClose sda_crypto
$ dd if=/dev/urandom of=/dev/sda bs=1M count=8
```
## Basic partitioning
So what we want to do is set up a single partition on `/dev/sda`,
the same way we did to prepare the disk. Then repeat the previous
commands to set up a cryptodisk, this time opening it as `lvm`
(so that it appears as `/dev/mapper/lvm`).
What follows is the LVM setup:
```console
$ pvcreate /dev/mapper/lvm
$ vgcreate vg0 /dev/mapper/lvm
$ lvcreate -L 1G -n boot vg0
$ lvcreate -L 16G -n swap vg0
$ lvcreate -l 100%FREE -n root vg0
```
I included the `swap` partition in the LVM instead of as a ZFS zvol,
because swap on zvols can sometimes deadlock, and this just makes things easier.
Now we want to create the filesystems.
For `/boot` we can just use `mkfs.ext4`,
but since I want to use `zfs` on `/`,
that will require some more work.
```console
$ zpool create rtank /dev/mapper/vg0-root
```
Feel free to call your pool whatever!
At this point you could also create subvolumes to
split `/`, `/home`, ... if you wanted.
## Mounting & Configuration
So that's all good. How do we initialise this system now?
We need to mount the zfs pool first, then `/boot` and then install
our linux secret distribution of choice (spoilers: it's [NixOS]!)
[NixOS]: https://nixos.org
```console
$ mkdir -p /mnt/boot
$ zpool import rtank
$ mount -t zfs rtank /mnt
$ mount /dev/mapper/vg0-boot /mnt/boot
$ nixos-generate-config --root /mnt
```
That last line is obviously NixOS specific.
You now have a fully encrypted disk setup, without
using EFI. Wuuh!
For the rest of this post I want to talk about how to make this
all work with NixOS and reproducible configuration.
Most of what we need to configure is in the `boot` option.
Let's go through the settings one by one:
- `boot.loader.grub`
- `efiSupport = false` actually the default but I like being explicit
- `copyKernels = true` enable this to avoid problems with ZFS becoming unbootable
- `device = "/dev/sda"` replace this with the device that holds your GRUB
- `zfsSupport = true` to enable ZFS support 😅
- `enableCryptodisk = true` to enable stage-1 encryption support
- `boot.zfs.devNodes = "/dev"` to point ZFS at the correct device tree (not 100% sure if this is required)
- `fileSystems."/".encrypted`
- `enable = true`
- `label = "lvm"` the label of your LVM
- `blkDev = "/dev/disk/by-uuid/f1440abd-99e3-46a8-aa36-7824972fee54"` the underlying
encrypted block device. You can find this out by looking at the symlinks in
`/dev/disk/by-uuid` and picking the correct one.
- `networking.hostId` needs to be set to a random 32-bit value (8 hex characters)
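The host ID is a 32-bit value, written as 8 hexadecimal characters. A quick way to generate one, sketched in Python (nothing NixOS-specific here; any 8 hex digits work):

```python
import secrets

# networking.hostId is a 32-bit value, written as 8 hexadecimal characters
host_id = secrets.token_hex(4)  # 4 random bytes -> 8 hex digits
print(host_id)
```

From a shell, `head -c4 /dev/urandom | od -An -tx4 | tr -d ' '` should produce an equivalent value.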
Following is the complete config to make it easier to copy stuff from:
```nix
boot.loader.grub = {
efiSupport = false;
copyKernels = true;
device = "/dev/sda";
zfsSupport = true;
enableCryptodisk = true;
};
boot.zfs.devNodes = "/dev";
fileSystems."/" = {
encrypted = {
enable = true;
label = "lvm";
blkDev = "/dev/disk/by-uuid/f1440abd-99e3-46a8-aa36-7824972fee54";
};
};
networking.hostId = "<random shit>";
```
And that's it.
If you spot any errors in this article (or any for that matter),
feel free to e-mail me or send me a PR over on github.

@ -0,0 +1,92 @@
Title: Usable GPG with WKD
Category: Blog
Tags: gpg, security, usability
Date: 2019-07-02
With the recent [SKS keyserver vulnerability][sks],
people have been <strike>arguing</strike> reasonably talking on the GnuPG mailing list
about how to proceed with keyservers, public key exchanges
and the GPG ecosystem as a whole.
[sks]: https://gist.github.com/rjhansen/67ab921ffb4084c865b3618d6955275f
As part of this [WKD] was mentioned.
It stands for "Web Key Directory" and is a standard
for making a user's public key available via their e-mail provider,
or a server on the domain that corresponds to their e-mail address.
There's several clients (such as [Enigmail] in Thunderbird)
that will use this standard to automatically fetch a user's public key,
when writing an e-mail to them.
[WKD]: https://wiki.gnupg.org/WKD
[Enigmail]: https://www.enigmail.net/index.php/en/
As an example: my e-mails are hosted with [mailbox.org],
but I use my own website as an e-mail alias.
This means that I can make my public key available via my website,
and clients using WKD can then get it automatically.
[mailbox.org]: https://mailbox.org
If you don't have your own domain and use a webhoster instead,
you might still be able to use this.
There's a [list of supported hosters][list] that you should check out.
[list]: https://wiki.gnupg.org/WKD#Mail_Service_Providers_offering_WKD
## Setting this up
(**Note:** in newer versions of `gpg` the tool `gpg-wks-client` is included,
which can handle setting up the folder structure for you automatically).
There are two ways of making your public key accessible this way:
the advanced and the direct way.
This post will only talk about the latter, because I find it easier.
You need to create a `.well-known/openpgpkey` directory on your server.
In this directory, place a `policy` file.
This can be zero-length, but is used to check for WKD capability.
Next, create a `hu` folder inside it
(<strike>no idea what this stands for...</strike>
— as pointed out by an attentive reader, it stands for [hashed-userid])
[hashed-userid]: https://www.gnupg.org/blog/20160830-web-key-service.html
Next, take the prefix of your e-mail address
(e.g. in `kookie@spacekookie.de`, this would be `kookie`),
hash it with SHA-1 and then encode the output with z-base-32.
You can use [this][cryptii] convenient encoding website.
**Edit:** Also pointed out by a reader: you can use
`gpg --with-wkd-hash -k <email>` to display your hashed user ID
instead of relying on an external resource for this.
[cryptii]: https://cryptii.com/pipes/z-base-32
Export the **binary** version of your pubkey (so without `-a`)
and place it in the `hu` folder, under the name that you just computed.
The resulting folder structure should look something like this:
```console
$ tree .well-known/
.well-known/
└── openpgpkey
    ├── hu
    │   └── nzn5f4t6k15893omwk19pgzfztowwkhs
    └── policy
```
You need to make sure that this folder is accessible through your webserver
(this either involves including it in a static site or configuring nginx correctly).
But fundamentally, that's it!
You can test if it works by setting a new `GNUPGHOME` and running this:
```console
$ env GNUPGHOME=$(mktemp -d) gpg --locate-keys <your-email-here>
```
And that's it! Clients like Enigmail, KMail or GpgOL for Outlook
will now automatically fetch your public key when composing a message to you.

@ -0,0 +1,42 @@
Title: Starting a public inbox
Category: Blog
Tags: blogging, communication
Date: 2019-07-20
I've been a lot more active on this blog in the last year than I have
previously, and that makes me pretty happy. I'm trying to become less
obsessed about publishing only perfect diamonds made of words, but
instead also publishing articles that might still have some flaws or
that I haven't rewritten twelve times yet ;)
As a result of this I have actually gotten more and more e-mails by
people saying that they read my blog, giving me feedback and sometimes
submitting patches to my [website] repository to fix typos or bad
formatting. And that's pretty cool.
[website]: https://sr.ht/~spacekookie/website
Recently I started thinking if the format of e-mail might not be well
suited for comments as well. Not just to me, but to allow other
readers to talk _about_ the stuff I post. I wasn't super sure if this
was such a great idea, after all...this is the internet we're talking
about. Someone will be an asshole and ruin it for everybody.
Then I discovered Drew DeVault's `public-inbox` mailing list,
which is basically exactly what I thought about creating,
hosted on source hut.
It might still be a terrible idea, but it's one I wanna try. I also
wanna automatically post new blog posts _to_ the mailing list, as
plain text, so people don't have to fuss around with my RSS feed if
they don't want to. I will host my `public-inbox` on source hut for
now too, especially considering that I've been trying it out for a lot
of smaller personal stuff, it makes sense. And I really quite like it
(might write about that in the future too)
So, if you have comments, questions or want to fix typos,
feel free to check out my [public-inbox][inbox]
[inbox]: https://lists.sr.ht/~spacekookie/public-inbox
Let's see how this goes then 😀

@ -0,0 +1,114 @@
Title: ociTools in NixOS
Category: Blog
Date: 2019-09-09 18:00
Tags: /dev/diary, NixOS, Virtualisation
With the release of NixOS 19.09 any second now, I thought I wanted to
blog about something that I've been working on, that [recently][0]
made it into `master`, and thus the new stable channel.
[0]: https://github.com/NixOS/nixpkgs/pull/56411
## What are OCI tools?
The [Open Container Initiative][1] (or OCI) produced a spec that
standardised what format containers should use. It is implemented by a
bunch of runners, such as `runc` (the standard Docker/Kubernetes
backend) and `railcar` (more on that later), and outlines exactly
what format a container's metadata and filesystem are to be stored in,
so as to achieve the largest possible reusability.
[1]: https://www.opencontainers.org/
The spec is pretty [long][3] and in some places not very
great. There's even a [blog post][4] from Oracle, talking about how
implementing an OCI runner in Rust made them find bugs in the
specification.
[3]: https://github.com/opencontainers/runtime-spec
[4]: https://blogs.oracle.com/developers/building-a-container-runtime-in-rust
## What are ociTools?
So now the question is: what does that have to do with
NixOS/nixpkgs? The answer is simple: I wanted to be able to
containerise single applications on my server, without requiring a
container daemon (such as docker) or relying on externally built
"Docker containers" from a registry.
So, `ociTools.buildContainer` was recently merged into
`nixpkgs/master`, allowing you to do exactly that. Its usage is
fairly straightforward:
```nix
with pkgs; ociTools.buildContainer {
args = [
(writeShellScript "run.sh" ''
${hello}/bin/hello -g "Hello from OCI container!"
'').outPath
];
}
```
The `args` parameter refers to a list of paths and arguments that are
handed to a container runner to run as init. In this case it's
creating a shell script with some commands in it, then getting the
output derivation path. Alternatively, if you only want to run a
single application, you can pass it `<package>.outPath` directly
instead.
There's other options available, such as the `os`, `arch` and
`readonly` flags (which aren't very interesting and have sane
defaults). In addition to those, there's `mounts`:
simply specify any bind-mount you wish to set up at container init,
similar to how you would already describe your filesystems in `nix`:
```nix
with pkgs; ociTools.buildContainer {
args = [
(writeShellScript "run.sh" ''
${hello}/bin/hello -g "Hello from OCI container!"
'').outPath
];
mounts."/data" = {
source = "/var/lib/mydata";
};
}
```
## Railcar + ociTools
So that's all nice and good. But what about actually running these
containers? Well, as I previously said, I didn't want a
dependency on a management daemon such as `docker`. Instead, I also
added a module for the aforementioned `railcar` container runner
(Oracle, please merge my PR, thank you).
It wraps very cleanly around `ociTools` and generates `systemd` units
to start containers, restarting them if they crash. This way you can
express applications purely in `nix`, give them access to only the
things they need, and be sure that their configuration is in line with
the rest of your system rebuild.
```nix
services.railcar = {
enable = true;
containers = {
"hello" = {
cmd = ''
${pkgs.hello}/bin/hello -g "Hello railcar!"
'';
};
};
};
```
The metadata interface for `mounts`, etc is the same for `railcar` as
for `ociTools`.
Anyway, I hope you enjoy. There are definitely things to improve,
especially considering the vastness of the OCI spec. Plus, at the
moment `ociTools` does require a bunch of manual setup work for an
application to function, if it, say, runs a webserver. It would be
cool if some NixOS modules could be re-used to make this configuration
easier. But I'm sure someone else is gonna have fun figuring that out.

@ -0,0 +1,85 @@
Title: Labels are language
Category: Blog
Date: 2019-09-20 15:38
Tags: politics
A phrase that I've heard way too fucking often recently (this edition
will contain swearing and might not be suitable for children of ages
below `NaN`) is "I don't care about labels, I want to do politics!"
As one might expect, this sentiment often comes from centrists. But
more often than not, it comes from fellow leftists. People who are
otherwise somewhat radical in their approach of the world, people who
think capitalism's gotta go and (sometimes) that states and borders
are bad. And it's a stance that has confused me, and keeps confusing
me, which is why I'm now writing a blog post about it, because
apparently that's what I do.
The problem I have with "I don't care about labels" is that it's
analogous to "I don't care about language".
Labels are a linguistic tool to talk about `$stuff` without having to
build up an entire language from first principles in every
sentence. Labels are very useful for general conversation about
things, like "what is a table?", "what is a train?", "what is art?",
etc.
When we look at the definition of labels, there's usually three
kinds. There's labels for **natural things, with natural
definitions**, such as the definition of a prime number. These are
fairly rare. Neither the definition of prime numbers, nor prime
numbers themselves, is going to change due to cultural context.
Secondly, you have labels that refer to **natural things, with
cultural definitions**. These are things like planets, mountains or
rain. Definitions can change and they're also subject to cultural
differences. What you and I consider "rain" will most likely depend on
where we grew up, if there was frequent rain at all, etc.
The last category are **cultural things, with cultural definitions**,
such as art, sub-categories of it (movies, games, etc), as well as any
identity label. Calling myself an anarchist doesn't naturally depend
on anarchy as a concept occurring in nature, nor can I define it just
by pointing at other properties of natural definitions. Rather, I need
to pre-define a whole bunch of cultural context, for you to be able to
understand why I am an anarchist and what that means.
**And that's the fucking job of labels!** We can't have the same 5
conversations over and over again and we can't rely on the trust that
people around us are always gonna be on our side. We should have
conversations from time to time about what these labels mean to us,
especially when it becomes clear that there's miscommunication.
But also, just because we're having a conversation about labels,
doesn't mean we need to start bikeshedding their definitions and scope
(whether it be anarchy, libertarian socialist, libertarian
communists - these are all kind of similar enough to work with). Their
context is still there to be used.
That doesn't mean that I am okay with any vaguely leftist label. I
have, over the last year or so, become more sceptical of communism,
and of talk about wanting to guillotine people and similar. Being an
anarchist means being opposed to state violence, no matter whose state
it is. But this isn't a conversation that is easy to have if I don't
already know a bunch of labels and can refer back to them. Furthermore,
maybe I don't _want_ to have this conversation in certain situations
so why would I have to engage with tankies when I don't want to?
Most of the time, the people who say "I don't care about labels, I
wanna do politics" never do any politics, due to a lack of a platform
or language to engage with similarly-minded people about strategy.
That's because political action depends on the people doing it having
some understanding of the work they're doing, how it relates to others
and themselves. There's a reason why minority groups rely on labels
(such as people in the LGBTQ community), and they serve an important
role in our discourse.
That's not to say that we shouldn't try to make the onboarding easier
and use less jargon when dealing with outsiders. Making
people more sympathetic to the radical left is important, albeit not a
job everybody might want to do.
Still...I feel labels are important, especially when we deal with
internal discourse. For the sake of the conversation, and everybody
involved in it.

@ -0,0 +1,183 @@
Title: Rust 2020: the RFC process and distributions
Category: Blog
Date: 2019-11-04 10:00
Tags: /dev/diary, rust, roadmap
I must have missed an e-mail in my inbox, because recently I started
seeing people publish Rust 2020 blogposts so I thought, why not. I
haven't been incredibly involved in the development of Rust in the
last few months (navigating the delicate balance of being self-employed,
working on free software and not burning out), but I feel
like that might change again. And also, I still have *feelings about
software*.
This post was also largely inspired by my friend [XAMPPRocky's post][erin],
as well as attending [NixCon][nixcon] a few weeks ago, and generally
interacting with the NixOS RFC process.
[erin]: https://xampprocky.github.io/public/blog/rust-2021/
[nixcon]: https://2019.nixcon.org/
## What even is an RFC?
An RFC, or "request for comments" is a mechanism by which a group of
people can get feedback from a wider community on proposed
changes. The idea is that a written proposal outlines a change's
scope, implementation details, rationale and impact on the ecosystem,
then people make comments on the proposal. Usually by the time that
everybody has stopped shouting at each other, the RFC is ready to be
merged, meaning it is accepted and its vision can be implemented.
This can either be implementing a feature, or removing `unstable`
flags from it.
Unfortunately I'm not being too flippant here: the procedure of how an
RFC goes from "proposed" to "accepted" is very vague and can depend on
*a lot* of factors. Needless to say, this can also be the source of a
lot of conflict in a community.
Rust has had an RFC process for a few years now, and most, if not all
decisions to the language and ecosystem have gone through it, and the
community feedback it entailed. Some go largely overlooked, like [this
one][rfc1] that I co-authored at the Rust All Hands 2018 in Berlin
(it's fine, I understand), others get hundreds of comments. This often
results in no meaningful conversation, in large part because it's
hard to have a discussion with 1000 people, and in part because
GitHub is a *terrible* platform to do anything on (sequel hook).
[rfc1]: https://github.com/rust-lang/rfcs/pull/2376
## RFC chaos
I remember this issue first coming up in the module refactoring
debates and the three (?) RFCs that were in total created before
everybody felt happy enough about it. These were the first large RFCs
I witnessed while being kinda part of the community. Many of the
people who were involved in them talked about how stressful it had
been, and I think they might also have been the first time that the RFC
process, the way the Rust project implemented it, started showing
its limitations of scale.
The fact that an RFC is proposed, with no real structure or framework
on how to continue afterwards means that either feedback is chaotic
and iterations on the design can seem arbitrary, or on the other hand
some RFCs remain open for years, in limbo, where nothing really
happens on them. Neither is a great outcome; both add to the stress
levels of the people involved in writing them, and generally just
slow down our decision-making process.
As XAMPPRocky wrote in her blog post:
> When 1.0 launched there was ~30 members of The Rust Programming
> Language, now in 2019 we have ~200 members. This is nearly 7x the
> amount of members, yet we've changed very little to be able to adapt
> to this growth.
While she was talking about how many people get paid for Rust, I feel
this is also applicable to the way that we make decisions. Many people
wrote about the RFC process for their Rust 2019 posts in rather vague
terms, including [myself][rust2019]. Well, I'm mentioning it again,
because I feel like we should try something concrete.
[rust2019]: https://spacekookie.de/blog/rust-2019-how-we-make-decisions/
## Shepherds and an RFC committee
The NixOS project has two concepts in their RFC process which I think
are valuable and that the Rust project would benefit from: RFC
Shepherds and the RFC steering committee.
The RFC steering committee is a group of 5-6 people, assigned
for a year to oversee any new RFC, make sure that shepherds get
assigned to it, and keep tabs on the progress being made. Are
shepherds regularly (in whatever interval they deem appropriate)
meeting to discuss the RFC, is feedback being taken into account by
the authors, and how is the discussion generally going?
They *do not* need to actually understand where the discussion is
heading, only make sure that a discussion is happening. This would
solve the problem of RFCs remaining open for years without getting
any further feedback, and un-clutter the list of open RFC PRs. RFCs
that were forgotten by their authors, or that the community has
largely moved on from, can be closed or rejected. It can also give
closure to people who have written RFCs that were never rejected, but
not accepted either (again, I'm cool, don't worry).
RFC shepherds are then assigned to an RFC (3-5 people) to actually
oversee the discussion and consolidate feedback into changes that can
be made on the RFC. They are also responsible for regular (again, up
to them how regular) meetings discussing the wider implications, as
well as small details of an RFC, usually on a video call, taking notes
for people who can't attend to read up on afterwards.
**An important note here:** shepherds don't have to be part of a team that
would otherwise oversee the development of a feature (like lang or
compiler) and instead can be any community member who feels like
nominating themselves or who is nominated by someone else. The idea is
that *everybody* should be involved in overseeing incoming RFCs.
## Governance WG and a new process
Generally, I think we all know that the RFC process needs to
change. It has a bunch of problems that have led to people physically
and mentally burning out while contributing to Rust. And, as
XAMPPRocky mentioned in her post, sustainability is important for Rust
to remain a healthy project, 5, 10 or 15 years down the line.
I haven't followed a lot of progress from the governance working group,
but reading their charter and some of the proposed [RFC stages][gov]
might address some of the issues in the process. I feel that
introducing a new RFC governance body (a set of people who rotate) as
well as the concept of RFC shepherds would be beneficial to the Rust
project as a whole, and anyone who's involved in any RFC related
discussions in the future.
[gov]: http://smallcultfollowing.com/babysteps/blog/2018/06/20/proposal-for-a-staged-rfc-process/
There's some tooling issues to address as well, but I feel those are
second to the social ones.
## Distributing Rust code
Wow, yea this was supposed to be a post about Rust 2020 and my
personal roadmap. While I would love to be involved co-authoring an
RFC on changing the RFC process in the ways that I propose in this
post, there's some personal projects I want to get going as well.
At the beginning of the year I told myself that my SQL migration crate
`barrel` would see a 1.0 release by the end of the year. This is
looking less and less likely, but I want to at least get close to
it. And then, next year, there will be a 1.0. There's a bunch of
improvements to the crate itself, as well as compatibility with other
crates (such as diesel and other migration toolkits), I want to make.
There's `clap` 3.0 things that are happening although maybe those will
all be done by the end of the year. Who knows?
But mostly, I want to address a pain point in application
packaging. Over the last year I've been tricked into maintaining a
Linux distribution, NixOS. And while I'm not _that_ involved in the
development of it, there's some things that often come up with
packaging Rust applications that could and should be better.
Mostly this is about applications, written in Rust, that want to
distribute artifacts other than their binaries as well. Be that
generated man pages, default configuration, or static files for a
website. Currently this process is entirely up to the packager of an
application and relies heavily on the application in question having
good documentation. This is also a problem for _all_ Linux
distributions, not just NixOS.
Enter `cargo-dist`, a tool that can be used by a project to easily
declare exportable artifacts and provides a way to tell an external
packaging tool (such as nix, or dpkg) where to copy files to make
a complete, working application. It <del>steals</del> borrows some
concepts from autotools, using a `PREFIX` and several paths that
artifacts can be copied into. This way a Rust application can easily
be made into a package by calling `cargo dist`, which internally does
a release build, and exports required artifacts to the appropriate
places.
All of this is pretty WIP and local on my laptop right now. But I
would love to finish it soon, and see projects in 2020 adopt this as a
standard to distribute files for Rust packaging.
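To give a rough idea of the direction: none of this syntax exists yet, so the invocation and output below are entirely made up for illustration, showing only the autotools-style `PREFIX` idea.

```
$ cargo dist --prefix /usr        # hypothetical, not real syntax yet
   Compiling myapp v0.1.0 (release)
    Exporting target/release/myapp -> /usr/bin/myapp
    Exporting docs/myapp.1         -> /usr/share/man/man1/myapp.1
    Exporting static/*             -> /usr/share/myapp/static/
```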

Title: Part 1: Against Primitivism
Category: Blog
Date: 2019-11-24
Tags: culture, technology, anarchy
This is the first of two blog posts that will be slightly more
philosophical than other texts on my blog.
For some of my regular readers this thesis might not be particularly
radical, but I still feel like it warrants being said.
## What is primitivism?
I think this is the most important question to ask and one that has
many answers. Depending on who you ask, and what their political
background is, the answer might be "a joke", or even "a slur".
In simple terms, primitivism yearns to return to a simpler time,
removing technology from human lives as much as possible. This is
meant to address one of the largest sources of anguish and anxiety in
our modern society by removing it from the equation. In many places
primitivism even frames itself as revolutionary.
The problem with this analysis is that it is inherently linked with
privilege. This can take many forms. A mild form would seek to abolish
the internet, personal computers and phones, arguing that letting
people return to real-life communities will result in more happiness
and a more "natural" life.
This fails to acknowledge that these technologies are life saving for
many, giving both social outcasts and various disabled people a space
to have a community.
But most often it is not those affected who make the case for these
measures. Usually it is white, able-bodied men that fail to understand
how their perception of society is skewed because of their own biases.
An even more extreme form of primitivism would reject more general
technological advancements, arguing for things to be "good" because
they are "natural". This analysis, even more so than the last, ignores
challenges that those who propose these solutions don't have to deal
with: what about medicine, what about artificial aids?
## Against the internet
It is true that in the modern world technology has been turned
against us. Large companies control the way that people interact with
technology, track them, and more. While it is possible to live outside
of this system, only a few people actually do. Even a lot of
technologists (software developers, hackers, ...) don't fully manage
to decouple themselves from the corporately controlled tech bubble;
consider how many hackers use Google, Twitter, etc.
On some level it is understandable that the narrative of primitivism
has emerged. This is not to say that these ideas are in any way new,
but in a way they are making a comeback in certain leftist circles.
For someone who doesn't know how to code or has only minor technical
literacy, this fight might seem lost. Approaches like the one
previously outlined seem welcome. I feel it is important to point out
though that the demographic of people coming to this conclusion is
already skewed. More vulnerable people that are dependent on
technology have a different analytical framework and come to radically
different solutions (more on that in a future post).
It is this narrative that inspired these posts, at least in part. I
feel that to proclaim to "blow up the internet" (for example) is lazy
and counter-revolutionary at its core. It frames all conversation
about improving technology and using it in our struggles to liberate
ourselves as regressive, and somehow collaborative with an abusive
system. Suddenly, instead of talking about the strategy behind our
solutions, you are forced to justify your work to people who
misunderstand its basis and see it as part of the thing you are
trying to fight.
## Misunderstanding technology
So what do I mean by that, and do I have an example? I'm not trying to
say that someone has to be a programmer to critique technology. I'm
arguing that the same level of engagement people would expect of
someone doing art criticism should be extended to tech.
There is this notion that computers are fundamentally flawed, not
because they are fallible and replicate a human's biases, but because
of their foundational inner workings: binary! The sheer fact that
computers operate on the basic assumptions of truths and falsehoods
means that there are assumed to _be_ universal truths.
Not only are conclusions from this hypothesis often shallow and
reductionist, they also misunderstand the performative,
interpretational nature of computers. On the wire every signal is
analog. It is the translation to binary that gives them meaning. But:
this does not mean it is representative of a truth, it is merely a
projection of an assumption. The same way that axioms in mathematics
are not "truths", but rather assumptions to build discoveries on top
of.
The same can be applied to binary data: on the wire all data looks
pretty much the same. Again, it is an interpretation that turns
something into a text or a picture. There is no truth to data, only
relative perspective.
Computers are indeed fallible and as flawed as the humans using
them. But this is precisely because there is no underlying truth to
computing, only the interpretations of those who make the
instructions. This is why I argue that machines are merely an
extension to ourselves rather than any "autonomous" system.
I say "autonomous" (in quotes) systems, because it is another term
that is deeply misunderstood. But this time it is because the creators
of these systems want it to be misunderstood. This is what the next
essay will cover.

Title: Another decade down the drain
Category: Blog
Tags: /dev/diary
Date: 2020-01-01
Now that the last decade is officially over, there's some things I
want to write about. I do this less so for others, and more for
myself. Around this time of year, people usually make new year's
resolutions, which is not quite what this is. Instead, I want to
reflect on what the last ten years have meant for me, and some plans I
have for the next decade.
I graduated school in 2011, started uni twice, dropped out twice (for
consistency), and found computer science and programming as my
passion. I moved to Berlin, worked many jobs, at a bunch of different
companies and finally, towards the end of this year, made myself
independent as a freelancer.
Why am I mentioning any of this? Well, I don't think it's really
possible to plan for the future, so I won't try. Instead, I want to
set myself a goal for the next ten years. Something that has nothing
to do with my career, or even a hobby that I hold right now. But let
me back up a bit…
Over 15 years ago (oof) I did an exchange program, which had me living
in France for six months. During this time I became pretty fluent in
French. Unfortunately, over time I forgot a lot of it again. Then,
over the last few years I started learning a few other languages,
apart from English and German (I consider myself bilingual). I am
conversational in French and Esperanto, and can understand a few words
of Russian.
So what's my goal for 2030? Easy: I want to learn ten languages well.
This includes becoming fluent in the languages I already know
partially (French, Esperanto, Russian), as well as learning a few new
languages entirely. Some that I am interested in are Arabic, Catalan,
Spanish, Scottish Gaelic and Kurdish. I don't know if it will be this
exact set that I will end up learning, but it's a good starting point.
Most importantly, I think that a decade is enough time to undertake
this venture. I don't know where I will be in ten years, or what I'll
be doing and neither do I think that it's ever really possible for me
to guess. But whatever my life looks like, I hope that I'll be
speaking a lot more languages.

Title: Some website design changes
Category: Blog
Date: 2020-01-03
Tags: /dev/diary, meta
Howdy, it's that time of year again where I apparently do design
things on this website (the last article is from exactly 2 years ago).
Apologies if you've never been to my website and only use the [RSS]
feed. Also: you're cool!
In previous iterations of this series I tended to fundamentally change
the way that the website worked, either by changing the way that the
html was generated, or by cutting down/adding categories. This time
I'm doing none of that.
Indeed things are getting simpler, but mostly on the CSS side of
things. I was able to delete about half of my CSS, which is pretty
cool. The biggest change in the way the website looks is the article
overview and the article pages themselves.
Generally I didn't like the card design style very much anymore so I
wanted to change that. But I also wanted to make it simpler to see
all my articles at a glance, without any summaries. I feel this makes
the blog feel more "web log-y", which I like. It also means it's now
consistently bright text on dark background and I think I've gotten
the typography down enough to make it all pretty.
Anyway, there's more things I wanted to do but those will come later.
I should also point out that my primary code host for this website
isn't GitHub anymore. It's now hosted on [sourcehut] and
collaborations are still welcome (if you see a typo or have general
comments), albeit not with pull requests anymore. You can submit
patches to my [public inbox] which I have hinted at in a previous
article.
In post-congress news I seem to have caught a cold, so: bye.
*curl up in bed*
[RSS]: https://spacekookie.de/rss.xml
[sourcehut]: https://git.sr.ht/~spacekookie/website
[public inbox]: https://lists.sr.ht/~spacekookie/public-inbox

Title: Collaborating with git-send-email
Category: Blog
Date: 2020-01-16
Tags: /dev/diary, git, email
There's a conversation that I keep having with various people, and
while some of my thoughts are available in e-mail threads on my
[public-inbox], I felt like maybe it was time to write a blog post
about it as well.
[public-inbox]: https://lists.sr.ht/~spacekookie/public-inbox/%3C87woa41sgn.fsf%40kookie.space%3E
The reason for this is that there is documentation on the internet on
how to use git-send-email in theory, but few ever really talk about
the resulting workflow beyond a single patch.
I won't pretend that the tools couldn't use some work or that it
doesn't take a bit of getting used to, but the reward is well worth
it, and something that I feel deserves more attention.
At the end of this I will talk a bit about why I think this mode of
collaboration is good, and could potentially be better than existing
collaboration models.
## The basics
To get into the basics of sending patches by email, I recommend
[git-send-email.io], which goes into the basics of the setup on
various platforms. It's one of those things where your setup will
vary slightly, depending on your OS and email provider, and not something
that I feel needs too much more explanation.
[git-send-email.io]: https://git-send-email.io
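As a rough sketch, the setup from that guide boils down to a handful of git config values. The server details below are placeholders, and you'd substitute your own provider's details:

```shell
# One-time setup for git send-email; all values here are
# placeholders for your own mail provider's settings.
git config --global sendemail.smtpserver mail.example.com
git config --global sendemail.smtpserverport 465
git config --global sendemail.smtpencryption ssl
git config --global sendemail.smtpuser kookie@example.com
```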
You can go through that set of slides to send a test patch to the
project that's hosted on sourcehut to see if your setup is working
properly. This is enough to send short one-offs to projects without
having to make an account anywhere (except for the e-mail account you
already have anyway).
## Discussion and patches
I think one of the main advantages of git mail collaboration is that
the workflow of sending patches and creating meaningful discussion on
patches is so interlinked. While you are using different clients for
sending patches and replying to feedback, the code that you send is
still available in your e-mail client. So it's easy to reply to
feedback while quoting parts of a patch for reference.
It's important here to send e-mail as plain text, because otherwise
it can be hard for people to quote and reply to your patches. There's
a great website
that helps you make sure your e-mail client can and is configured to
use plain text: [useplaintext.email].
[useplaintext.email]: https://useplaintext.email/
## Patchsets and revisions
So having the basics out the way, I think it's important to discuss a
more complete workflow. When people send contributions to projects
using pull-requests, often a set of changes will go through several
revisions before getting merged. It's also nice to quickly force push
to fix a small typo or similar without having to let that typo ever be
part of the history of the commits that get merged.
When collaborating with git over e-mail this is still possible via
"revisions". When sending a patchset, you can provide a `-v`
parameter with a number. The patches you send will then have a
revision number in them, as follows: `[PATCH v2]`. It's recommended
to send newer revisions of your patchset as a reply to the previous
one, i.e. `[PATCH] foo` being the parent of `[PATCH v2] foo` in the
same thread.
If you get replies to your patch, you can make changes to your
commits, then send out a new revision to the whole set, or just
individual patches, if your set of changes contains a lot of code and
you want to keep the volume of e-mails down.
The advantage of this is both that people can comment on things as
they happen in the history of the code instead of being forced to
understand a set of changes all in one go, and that you are
automatically encouraged to squash commits with messages like "small
fixes" before sending them out to a project's mailing list.
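To make that concrete, here is a sketch of the revision round-trip in a throwaway repository; the list address and Message-Id in the final comment are placeholders, not real values.

```shell
# Throwaway repo so the commands can run anywhere.
cd "$(mktemp -d)"
git init -q
git config user.name "Example" && git config user.email kookie@example.com
echo base > base.txt && git add base.txt && git commit -qm "initial import"
echo one > a.txt && git add a.txt && git commit -qm "ws: first change"
echo two > b.txt && git add b.txt && git commit -qm "ws: second change"

# After reworking the two commits based on feedback, regenerate them
# as revision 2; both the files and the mail subjects get a v2 marker.
git format-patch -v2 HEAD~2

# Then send them as a reply into the previous revision's thread:
#   git send-email --in-reply-to="<previous-Message-Id>" \
#       --to="~you/your-list@lists.sr.ht" v2-*.patch
```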
## Cover letters
One neat thing that many people also don't know about are cover
letters. Sometimes a set of changes is so large, or requires so much
preface to make sense, that it's a good idea to write an introduction
for someone to read first. This is what GitHub pull-request descriptions
were derived from.
To generate a cover letter you need to create your patches in two
stages:
**git format-patch** to generate a series of `.patch` files that can
later be turned into e-mails. This tool takes a `--cover-letter`
parameter that tells it to generate an empty patch called
`0000-cover-letter.patch`, which contains the shortlog and diff-stat
of your proposed changes. You are then free to edit this file in your
favourite text editor to write a friendly introduction to your
patchset.
Another often overlooked feature here is "timely commentary":
comments in the patch e-mail that won't be part of the patch or the
commit message itself. They can be made after the `---` marker in a
patch mail, but before the actual patch starts. This section is
usually used for the diff-stat of that particular patch.
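Schematically (with invented contents), such commentary sits right after the `---` marker, above the diff-stat, in the generated patch file:

```
Subject: [PATCH v2] ws/kitty: setting default shell to tmux

Instead of launching a plain login shell, start tmux directly.
---
v2: dropped the font_family change, since that feature is broken anyway

 modules/workstation/kitty/kitty.conf | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/modules/workstation/kitty/kitty.conf ...
```

Everything between the `---` and the `diff --git` line is discarded when the patch is applied, so the note about v2 never ends up in the commit history.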
After that you can use **git-send-email**, almost the same as before,
but instead of giving it a series of commits to send (say `HEAD~3`),
you now just say `*.patch` or wherever you saved the patch files
earlier.
You don't have to resend the cover letter every time you send a new
revision of your whole patchset. On the other hand, if things have
fundamentally changed, it might be a good idea to add one again, just
to make sure it's up to date for new people joining the thread for
feedback.
## An example
I always work well with examples and I think it's good to illustrate
how all of this can work, especially for people who might be scared by
the concept of collaborating this way.
I'm creating some patches for my `libkookie` repo and I want to get
some feedback from myself, so I decide not to push to master, which I
totally could do, but to my public-inbox instead.
There's two commits that I want some feedback on, so I make my
commits, and verify that they are indeed what I want them to be:
```
❤ libkookie> git log HEAD~2..HEAD
commit 3a147c15e998d57d9db877c9cd92d0cf04411cc9 (HEAD -> master)
Author: Katharina Fey <kookie@spacekookie.de>
Date: Wed Jan 15 21:01:06 2020 +0000
ws/kitty: setting default shell to tmux
commit d54937fa9414d87971a01dbc0dec5105b97e8f7e
Author: Katharina Fey <kookie@spacekookie.de>
Date: Wed Jan 15 20:59:40 2020 +0000
ws: adding gpg submodule by default
```
Well, perfect. This way I can also verify that the sometimes
confusing range syntax in git (`HEAD~2..HEAD`, meaning all commits
after `HEAD~2`, so two commits ago, up to and including `HEAD`, so
now) works the way I'm expecting it to.
I think this is quite an impressive set of changes so I decide to
reward myself with a good ol' cover letter.
```
❤ libkookie> git format-patch --cover-letter HEAD~2..HEAD
0000-cover-letter.patch
0001-ws-adding-gpg-submodule-by-default.patch
0002-ws-kitty-setting-default-shell-to-tmux.patch
```
I can go and verify the patches look okay, do a final pass over the
typos and then edit the cover letter as well:
```
From 3a147c15e998d57d9db877c9cd92d0cf04411cc9 Mon Sep 17 00:00:00 2001
From: Katharina Fey <kookie@spacekookie.de>
Date: Wed, 15 Jan 2020 21:06:37 +0000
Subject: [PATCH 0/2] The best patchset in the universe
To whom it may concearn,
I have created the most magnificent patch set in the history of the
universe and I really think you should merge it because otherwise
you'd be a git.
Cheers,
me!
Katharina Fey (2):
ws: adding gpg submodule by default
ws/kitty: setting default shell to tmux
modules/workstation/default.nix | 1 +
modules/workstation/kitty/kitty.conf | 3 ++-
2 files changed, 3 insertions(+), 1 deletion(-)
--
2.24.1
```
Perfect, they'll just love that over at spacekookie inc. I quickly
exit, save, and close the file and send off the patches:
```
❤ libkookie> git send-email --To "~spacekookie/public-inbox"@lists.sr.ht *.patch
0000-cover-letter.patch
0001-ws-adding-gpg-submodule-by-default.patch
0002-ws-kitty-setting-default-shell-to-tmux.patch
(mbox) Adding cc: Katharina Fey <kookie@spacekookie.de> from line 'From: Katharina Fey <kookie@spacekookie.de>'
From: Katharina Fey <kookie@spacekookie.de>
To: ~spacekookie/public-inbox@lists.sr.ht
Cc: Katharina Fey <kookie@spacekookie.de>
Subject: [PATCH 0/2] The best patchset in the universe
Date: Wed, 15 Jan 2020 21:10:48 +0000
Message-Id: <20200115211050.31664-1-kookie@spacekookie.de>
X-Mailer: git-send-email 2.24.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
The Cc list above has been expanded by additional
addresses found in the patch commit message. By default
send-email prompts before sending whenever this occurs.
This behavior is controlled by the sendemail.confirm
configuration setting.
For additional information, run 'git send-email --help'.
To retain the current behavior, but squelch this message,
run 'git config --global sendemail.confirm auto'.
Send this email? ([y]es|[n]o|[e]dit|[q]uit|[a]ll):
```
You can get the question about the Cc not to show up by providing
`--suppress-cc=all` as a parameter, but I find it useful. Basically a
Cc is just a ping, and if you're mentioning people by e-mail address
in your patchset (for example, if you have `Co-Authored-By` lines in
there) the appropriate people can be pinged for you automatically.
I'm happy with things as they are, so I hit "a" for all, and send
off all three e-mails. (You can find them in the archive
[here][thread]).
[thread]: https://lists.sr.ht/~spacekookie/public-inbox/%3C20200115211246.1832-1-kookie@spacekookie.de%3E
I wait, drink some chocolate oat milk, and wait for a reply.
```
Katharina Fey <kookie@spacekookie.de> (0 mins. ago) (inbox unread)
Subject: Re: [PATCH 2/2] ws/kitty: setting default shell to tmux
To: ~spacekookie/public-inbox@lists.sr.ht
Date: Wed, 15 Jan 2020 21:30:23 +0000
A comment on this commit:
> --- a/modules/workstation/kitty/kitty.conf
> +++ b/modules/workstation/kitty/kitty.conf
> @@ -1,10 +1,11 @@
> font_size 10
> -font_familt twemoji-color-font
> +font_family twemoji-color-font
This was a typo before but I think we don't really want this feature
anymore, because all the font integration stuff is broken anyway. I
think it'd be better to remove this line and then add it again when it
becomes relevant again.
~k
```
What's interesting is how feedback can be layered into the patch
itself, to comment on changes that need to be made. This way it's
possible to keep track of the relevant lines of code, and also be able
to have a threaded conversation.
I guess I have a fair point here, the emoji fonts have been broken on
my computer for ages. So while I'm somewhat annoyed by having to
change things again, I can also understand why.
What I want to do now is reply with only a second revision on this one
commit because I don't know if there's more feedback coming for the
rest of the patchset. First, we need to figure out what the
`Message-Id` of the previous reply is, either via your e-mail client,
or the public mail archive of the project.
**Note**: this can sometimes be tricky, but usually you should be able
to see the "raw" message in most mail clients to find the `Message-Id`
of the e-mail you care about.
```
❤ libkookie> git send-email \
--To "~spacekookie/public-inbox"@lists.sr.ht \
--in-reply-to "<87r2001k7k.fsf@kookie.space>"
[...]
OK. Log says:
Sendmail: /home/.nix-profile/bin/msmtp -i ~spacekookie/public-inbox@lists.sr.ht kookie@spacekookie.de
From: Katharina Fey <kookie@spacekookie.de>
To: ~spacekookie/public-inbox@lists.sr.ht
Cc: Katharina Fey <kookie@spacekookie.de>
Subject: [PATCH] ws/kitty: setting default shell to tmux
Date: Wed, 15 Jan 2020 21:42:56 +0000
Message-Id: <20200115214256.1770-1-kookie@spacekookie.de>
X-Mailer: git-send-email 2.24.1
In-Reply-To: <87r2001k7k.fsf@kookie.space>
References: <87r2001k7k.fsf@kookie.space>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Result: OK
```
The way that the reply works means the thread now looks somewhat like
this:
```
[PATCH 0/2] The best patchset in the universe
↳ [PATCH 1/2] ws: adding gpg submodule by default
↳ [PATCH 2/2] ws/kitty: setting default shell to tmux
↳ Re: [PATCH 2/2] ws/kitty: setting default shell to tmux
↳ [PATCH v2] ws/kitty: setting default shell to tmux
```
I wait a bit longer and I get another e-mail thanking me for my
contributions, and saying that the patches have been merged.
Sometimes it can be nice to re-generate a patchset with all the latest
versions of patches, even if they've been sent to the list before,
just to make it easier to apply them. But that's often also not
required.
## The conclusion
Hey, you made it all the way to the end of this post, congrats!
I think the way of collaborating I outlined in this post has a lot of
advantages over currently popular models (i.e. pull-requests on GitHub
or merge-requests on GitLab). People talk about wanting to
decentralise development, escaping these walled gardens that companies
have built, and they often disagree on how this can best be done.
There's even people who gladly opt into this model because they feel
that the added gamification of the platform will get people to work
more. Not only do I think that the relationship people have with
maximising a number on a website can be abusive, but I've also felt
better getting patches into projects via a mailing list than any PR
has ever made me feel.
I'm not gonna pretend that the tooling for all of this couldn't use
some work: git-send-email has 1000 confusing options and also getting
the `Message-Id` to reply to patches with can be hard and annoying.
In fact, I'm working on some tools to make both sending and applying
patches easier (as part of the [dev-suite] project started by my
friend Michael). I'll write more about this soon!
[dev-suite]: https://git.sr.ht/~spacekookie/dev-suite/
In this model of development there's no need for a central service
like GitHub, no need for special software to make pull-requests
federate or even for you to host a copy of the project anywhere.
All you need is the code the project provided you, a text editor and a
mail address.

Title: So there's a pandemic
Category: Blog
Date: 2020-04-02
Tags: culture, politics
(**Note:** this article was written as is, but never published. I
decided to retroactively publish it because I felt it was important
to have it in my log of articles, but I also don't think I can really
add much more to this now since the political climate has shifted
quite hard.)
I've been meaning to publish this article a lot sooner, but here we
are. The last month has been kinda bonkers. Reading this in the
future might feel either funny or wistful. Reading this in the now
must feel redundant.
Anyway, I've been working on a few things, and also have been thinking
about a few blog posts that I've wanted to write. (One has already
gone up, so once again I'm a literary genius with getting my ordering
right). I'll try to write a bit more, and care less about each
article being as polished as previous ones. I feel like I've said this
before, so I'll try not to become _too_ spammy. There's still
Twitter…
In this pandemic I've seen a lot of rhetoric mirror the security
discourse post 9/11. Additional restrictions on public life are being
advocated for, because they will save lives. Checkpoints, new rules
and regulations: everybody has an opinion on how things should be
run, and very frequently people run to the state to enforce anything
"health officials" say might work.
Ultimately this virus is the perfect threat: you can't see it, you
don't even know if you have it, you can endanger people by moving
around freely. And there will never again be a time when "but what
about a virus" won't be used to justify restrictions on the lives of
people.
And please don't interpret me saying these things as there being some
conspiracy to enact authoritarianism, as much as that sounds like a
novel you'd read, that's not how the world works. This isn't
plotting, it's simple fascist opportunism.

Title: Rust, or: how to run a community
Category: Blog
Date: 2020-04-08
Tags: free software, culture, politics
This will very much be an off the cuff post about community building
with insights that I've seen from various communities I was a part of
in the last 10 years. None of this is to be taken as facts, and is
entirely personal opinion. I do hope however that this post might
make one or the other think about how we run communities.
Really… I'm just bored and want to write something.
The communities I've been a part of involve Rust, Fedora, Moonscript
(a CoffeeScript-style language for Lua), libgdx (a Java game framework), Nix(OS),
and the Homeworld 2 modding community that's to blame for me learning
to code and unleashing my terrible puns on the world.
There seem to be two avenues of community organising: centralisation
and distribution. One favours building a community around a shared
name, domain, communication channel, and workflow. People work on the
same things, in the same place, under the same name. This can work
really well for smaller projects, and generally means less friction
when people from different backgrounds have to talk to each other,
because everybody is kinda in the same place. This is how most
communities get started, and how Rust was working until about one year
ago.
There is a fundamental problem with this model though: at some point
your community will become too big, and people will start to fracture
out into their little bubbles. There's no fixed point at which this
happens, but it seems to happen sooner or later for most projects.
I think as community builders it is our job to embrace these
fractures, letting sub teams spin off into their own little ventures,
while using the shared name to advertise these ventures. I think it's
okay for a project not to present a closed front to outsiders, but to
very much embrace the fact that projects are run by many individuals
who choose to collaborate on something larger than
themselves.
The reason why I'm such a big fan of [e-mail collaboration] via
mailing lists (both for conversations and sending patches) is that it
encourages this separation. A project that forces people who
cross-collaborate to jump from tool to tool is just as centralised as
a project that only has a single communication channel. But there's
definitely examples of projects that have grown into little bubbles
that still work on a shared "product", without having to all do it in
the same place: the Linux kernel.
Whether you agree or disagree with my take on e-mails, I think we
should all be aware of the finite size that a community can have. And
at what point should we start to embrace community mitosis?
[e-mail collaboration]: https://spacekookie.de/blog/collaborating-with-git-send-email/

Title: On gender, transition, and re-transition
Category: Blog
Date: 2020-06-18
Tags: /dev/diary, culture, gender
It's pride month. Which actually has nothing to do with this post,
but might have inspired me to write something. Anyway, if you're
someone who doesn't like thinking about gender, and only follow me for
my `s p i c y r u s t t a k e s`, maybe skip this one. Also, if
you're a TERF and use any of these words to harm anyone, go choke on a
brick you fucking piece of human garbage.
Cool, that's the disclaimers out the way.
## The part that's all about me
I guess this is a kind of coming out: I'm trans! I've been "on the
internet" for a while now, and have way more people following me
than is reasonable for my boring life. And I guess a lot of you might
not know this. I've never made a point of it, and I know from
personal experience that people read my gender _very_ differently.
Some timeline stuff I guess. I'm not gonna do the "I've always been
trans" thing, because I know that's not true. That being said, I
was quite miserable before my transition for a few different reasons I
won't get into right now.
I started socially transitioning around 2011, then started HRT in
2013, and then...well, just kinda lived my life. I'm not gonna sit
here and pretend that transition wasn't the correct thing for me to
do, and I am a much happier and well-rounded person now because of it.
Why am I writing this? Because after all I've been pretty happy just
being stealth (meaning not being public about my trans-ness) for a
while now. And I'm not the kind of person who wants to share my
personal life all that much. It's weird being on the internet when
everything you say is gonna be seen by thousands of people.
Well...the reason is that my gender identity is changing. Has
changed. I guess I've always been a little butch, but in recent times
(meaning the last ~6 months or so) I've been feeling explicitly more
masculine. I've wanted to go by he/him pronouns, wear different
clothes, express myself differently in public, grow a beard (something
I've always failed at lol). It wasn't just that my idea of what being
a woman meant changed, I think fundamentally the way I related to my
gender changed.
And this is where things become really complicated.
## The part that's about society
The way that our society at large handles transgender discourse is
toxic. From the very beginning of my coming out, there's been an
inscrutable focus on "why" people are trans. There must be medical
reasons. Look at this brain scan of this one transgender lady. Her
brain looks like a cis lady's brain. This will once and for all prove
that trans people are _trapped in the wrong body_! ...
I understand why this framing has come about and stuck, because it was
a great way for somewhat liberal people to convince more conservative
people that "no actually trans people are like...real, and not just
making it up". This line of reasoning is called "trans medicalism"
and it's rooted in the idea that trans people are _scientifically_ the
gender that they say they are.
This approach has several problems. It oversimplifies enormously,
and makes assumptions about the nature of gender that many people
would not agree with. Worse, even the people who don't really believe
in it, who only use it as a weapon against the TERFs to defend their
own identity, end up upholding the European binary gender model, one
that would rather many non-binary and even transgender identities
didn't exist in the first place. It is a model that will never truly
accept you for being trans, only sort of tolerate you, because maybe
you're close enough to the status quo to fit in.
And a lot of trans people start internalising trans medicalism as a
survival mechanism. This is where the discourse becomes harmful.
Not only does it prevent some trans people from actually expressing
their non-binary gender identities; it also produces insecure people
whose sense of identity is threatened by the idea that there's no
medical truth to being trans, and who bully anyone who doesn't
conform.
Even worse, they will sometimes align themselves with TERFs to defend
the "true trans people" from "the younger generation that's ruining
everything". And this is where it all comes back to me.
## Gender isn't fixed
During the last few months I've been trying to find accounts from
people similar to me, and it made me very, very scared. I didn't
really know what to call myself, because I'm by no means a cis man.
But looking for people who "de-transitioned", I found a lot of people
who were hurting, who felt they had been pressured into transition,
and who were being rallied around by TERFs who thought these poor
souls proved their bullshit points of view.
And I saw a lot of trans people yelling at any trans person who was
even considering "de-transitioning", as if they were some kind of
traitor. I guess I
understand why. You don't wanna be giving the TERFs more ammunition.
You don't want to undermine your own identity. Maybe you don't really
believe in it, but your self esteem is built on trans-medicalism. How
do you deal with people who de-transition?
I'm still not really sure what I would call myself, because I think
de-transitioning is the wrong term for what I would want to do. And
really, I think I'd like to think about it as just another section of
the life-long transition of my gender. To live means to change, and
my gender will change until I die. There's nothing I can do to stop
it, and I think trying to control it will inevitably fail.
I think it's also important to point out again that I regret nothing.
I'm glad I've been living as a woman for close to 10 years. I don't
know how I want to express my gender identity, or on what scale
necessarily. Maybe I'll use different pronouns with friends, maybe
only from time to time, maybe I'll change nothing in the end because
this is all "just some phase".
But I think it brings me to the core point I want to make here:
**Stop pretending as if transitioning into a gender is the
end-all-be-all of your gender identity!**
So what if something is a phase? In my opinion it doesn't make it any
less valid. Transition is a journey, not a means to an end. And
transitioning to femme, and back to masc (MtFtM), or vice versa
doesn't make someone less trans. How can people believe that gender
is a spectrum, while not accepting that people will move around on
this spectrum?
The worst thing is: this is something I would have expected to explain
to my mum, but I didn't expect it to be such a controversial thing
in the trans community itself.
## Why write this?
I've been thinking about writing this article for at least a few weeks
now. And undoubtedly many days will pass between this first draft
and the finished thing on my website. I revealed a lot of
personal things in this post, things that I wouldn't otherwise want to
share.
I think ultimately I want to be a voice of support for anyone who's
feeling similarly to me: "older" trans people (I'm not even 30 lol),
who have been doing this "new gender" thing for so long that it became
normal, who might feel themselves wanting to either express themselves
in much more feminine or masculine ways than before, or at different
intensities, or more androgynously.
I think it's important that we remind ourselves that transition isn't
a means to an end, that gender is ever-changing, and that we normalise
the idea of re-transition. And this doesn't just apply to cis people!
Trans people carry the trauma of society with them and can be just as
toxic in this matter as the TERFs.
Life is too complex for anything to remain the same forever. We all
need to become better at embracing this.

@ -0,0 +1,232 @@
Title: "The good place" vs. the ethics of society
Category: Blog
Date: 2020-09-20
Tags: culture, politics, philosophy
A few months ago I was bored and I decided to watch "The good place".
It's a show that had been introduced to me before, and I even watched
about half of the first season, before I kinda forgot about it. It
had left me feeling mostly irritated and uninterested, so I moved
on with my life. Up to the point where I felt _really_ bored, and I
started watching it again.
I don't really wanna talk about the show from an art criticism
perspective. It's quite fun to watch at times, the premise is quirky
and all the characters have something to set them apart that makes
them recognisable for someone who's bad at differentiating people.
But it's a comedy at its core, and most of the "humour" left me
feeling kinda cold. It didn't so much have jokes as vague
references to jokes.
Really, the show wasn't special, funny, or even bad enough for me to
really care about it too much. There was however something in the
moral text, and subtext of the show that bothered me, that I've kept
thinking about. And that's what this post is going to be about.
## Good vs Evil
The main premise of the show is centred around the idea of "good
people" vs "bad people" (the good place vs the bad place). It mirrors
heaven and hell, without putting a precise theological term on it,
because this concept has existed in various faiths throughout the
ages.
The story follows a woman who gets sent to the good place even though
she's a horrible person. Most of the first season is dedicated to
this mystery. At first she thinks this is a mistake, until it becomes
apparent, that bad people being put into a fake "good place" is part
of a weird psychological punishment system in the bad place. They are
in fact in the bad place. When they find out about this, their
memories get wiped, and it starts from the beginning, with slight
alterations. But the group figures out that they aren't in their
personal paradise again and again, and so their memories get wiped
again, and again.
The show wants to demonstrate that people can get better, seeing as a
group of "bad people" were sent to a fake "good place", and improved
as people. The permanence of "good people" and "bad people" is called
into question. Some stuff happens, and the group of four people, and
one demon who has taken a liking to them, end up on the run.
Throughout the plot it becomes apparent that the system is broken in
more subtle ways too: nobody gets to go to the good place anymore.
Nobody is good enough; too high are the standards of what counts as a
"good person". Furthermore, when they manage to get into the good
place, it becomes clear that eternal bliss with no ups and downs, and
no end in sight is just a different type of hell.
The show concludes by restructuring the system, making the "bad place"
not into a torturous nightmare, but a place where your actions and
emotions are being tested, and called into question. The idea being
that there is no such thing as a "bad person", and that everybody
could go to the "good place", if they accepted that they have flaws,
and worked on them.
They also mildly restructure the "good place" to have "an end", which
is death. Isn't that nice, everybody gets to live their perfect life
in heaven, then they die.
## Good people & bad people?
So that was the plot. As I said, I'm not gonna criticise the show for
its scene-to-scene writing, or even the overarching plot. It mostly
tries (and manages) to be wholesome. It does, however, have issues
throughout that are rooted in a very flawed understanding of
philosophy and morality.
The moral compass of the show is a character called Chidi, a professor
of moral philosophy who died and was sent to the "bad place". He was
deemed a bad person because of his indecisiveness. It is shown that
he tried to be a good person, but got too caught up in the details of
what that meant, which caused great pain to the people around him (and
which got him killed).
Throughout the show he quotes Kant a lot, with some other racist white
men from history sprinkled in there. His understanding of philosophy
isn't very deep, or nuanced. Either he was supposed to be bad at his
job, in which case the show didn't really take the time to develop
this enough to be poignant, or it just demonstrates that the show was
written by people with basically no knowledge of the field.
I argue that the way that "the good place" portrays philosophy and
moral choices in philosophical frameworks is very representative of
how our society works, and how people think about "good vs bad".
But let's back up a bit. For most of the show (if you've watched it,
or will), the ideas it tries hardest to communicate are
"there are no bad people", "hell is a bad concept", etc. This becomes
pretty obvious. However, the larger system of the afterlife remains
pretty much entirely unexamined. Why is there an afterlife, and why
do we need one? These are questions the show never asks, let alone
attempts to answer. Any criticism of the system is phrased in a coy
way that leads to reform, not abolition, i.e. changing what the
"good place" and "bad place" mean, not whether they exist.
## Moral individualism
I said the show is representative of how people think about morality,
and this doesn't just start and end at "what is a good person". It
also applies to how the show deals with individualism.
What is individualism you may ask? I'm glad you did (not really, now
this post has to be longer...). Individualism is one of the axiomatic
philosophies that the western world is built on. It's the idea that
each individual is responsible for their own destiny, and identity.
In its (mostly) harmless form it's used to sell people things
that can be "customised" to fit your "own personal style" (without
_really_ giving you any autonomy), whereas on a higher and more
sinister level it is used to justify the horrors of society. As an
out of context Margaret Thatcher would say "there's no such thing as
society, only people". After all, society is just men, women and
those damn enbies, that all make their own free choices, and if
society is bad, then that's just a representation of how people are
bad.
This is an over-simplification of course, but it digs at the core of
what individualism means to us. It's a way to absolve society of
guilt, up to refusing the existence of it all together. Individualism
touches many, if not all aspects of society, and it would take too
long to really examine them all here. Instead, I want to focus on
what this means for "the good place".
## There is no society in "the good place"
I don't know if the word "society" is ever used in the script, but it
is certainly not a subject of conversation in any of the episodes.
None of the characters ever acknowledge that there is a human society,
or discuss what it looks like. The focus is on individuals. After all,
the fact that the world is bad is the result of just a few bad people,
who need to become better.
This is where the view the show tries to hammer into you, that "there
are no bad people", falls flat. Because it's a lie.
Human society is structured in a way that a few select people at the
top have a lot of wealth and power, while the rest of us live in
varying degrees of poverty by comparison. I grew up in Germany, so I
won't pretend I don't live in comparative luxury and peace, but we
_all_ suffer under the ruling class. This is a reality the show
refuses to acknowledge, and it makes its arguments about moral
philosophy feel almost dystopian.
Maybe this is controversial, but there are bad people. If it is your
job to harass homeless people, you are a bad person. If it is your
job to enforce the "war on drugs" that overwhelmingly affects black
people, you are a bad person. If you are a billionaire, you are a bad
person. You are in a position where you _could_ change society for
the better. You _could_ give all your wealth away, and actually help
people. But you don't. And no, I don't mean the fake philanthropy
that rich people indulge in, because those are usually just schemes to
pay less tax and massage their public image. No billionaire ever
gives away so much money that they stop being a billionaire.
The ending of the good place is framed as a beautiful thing where
everybody gets to live a life in heaven in the end, if they manage to
work on themselves to become better people. And sure, there are "bad
people" like sexists and racists, and they'll just get stuck in these
tests forever, unable to escape until they become better people.
It doesn't matter how much suffering you've caused others, you get to
go to the good place if you manage to accept that you were bad.
## Why an afterlife?
So I mentioned that in the show, the existence of an afterlife is
never explained, rationalised or called into question. It exists in a
vacuum, the same way that people in it live in a vacuum.
The ending of "the good place" is framed in a way that is meant to
make you feel happy and hopeful, but all it makes me do is wonder
why we had to wait until the afterlife for people to deserve
happiness.
The world is an awful place because of people, sure, but it's the
system that makes people into monsters. Not only will it corrupt
people going in with good intentions, it will turn people with bad
intentions into powerful rulers.
"The good place" fails, or refuses, to understand that society
exists, and portrays a moral system in which all actions are
disconnected from the bigger picture. If you were a nice person to
people in person, and generally tried to be `g o o d` then it doesn't
matter if your employees need to pee into bottles, or if your company
is burning the rainforest to ash.
Hell, the ruthless business lady in the "medium place" was sent there
because she saved someone in her _last_ moment. But the "bad things"
she did??? SHE WAS RUDE TO PEOPLE. Don't worry about the exploitation
through the capitalist machine, though; that's apparently all fine.
## The shape of art & paradise
To wrap up this article I want to at least mention why I'm writing
about this. Because I said earlier that I didn't find the show
special, funny, or intentionally bad enough to really engage with it.
And now here I am, writing upwards of 2000 words about it :)
The media we consume as people shapes us, and influences us in quite
profound ways. The way we tell stories is symptomatic of how society
perceives itself, and how people see themselves in society. Media
that doesn't acknowledge the existence of society, and the
suffering it brings, will inevitably whitewash reality, and push this
influence on anyone consuming it.
At this point I would have liked to mention a better show or movie, or
even book, but none really come to mind. I guess it's hard to point
to any text and demand it delivers a coherent world philosophy, while
also being a story with characters and plot.
As a society we need to grow the fuck up. The stories we tell each
other of heroes and villains, with the fate of good and evil hanging
in the balance, all while these actors exist outside of anything that
could be called a power hierarchy, need to end. Only when we grow out
of this world view can we realise that paradise is within us, and
that collectively we can create it here on earth.
Not gonna lie though, trains that go through space are pretty cool.

@ -0,0 +1,12 @@
Title: A movement for autonomous technology
Category: Blog
Date: 2019-12-12
Tags: culture, politics
Status: Draft
I was recently at an art event of sorts called T/H\X, and met a bunch
of cool people working on interesting things, relating philosophy to
technology and thinking about the implications of the things we create
in the wider context of the world.
During

@ -0,0 +1,118 @@
Title: Issue trackers are garbage (and here's why)
Category: Blog
Tags: /dev/diary, dev culture
Date: 2019-12-13
Status: Draft
(Outline)
- Tracking what needs to be done/ progress
- Giving newcomers a place to be onboarded
- Let people ask questions and report bugs
- Build a knowledge base with answers that can be searched over time
- Tracking issues don't get updated or suck up a lot of time being maintained
- Don't actually make it easy to write updates about what people are doing
- Discussions can easily be derailed
- "chat" like discussions take over spaces easily
- editing comments can be massively misleading
- How to discuss things that might be two issues at once?
> We don’t know who struck first, us or them. But we know that it was
> us that invented **issue trackers**. At the time development was very
> chaotic and it was believed that they would bring **order into the
> chaos**.
Whenever I talk to people about making FOSS projects more
approachable, one thing that always comes up is issue trackers. "Label
your issues", they say, "Mark them as 'good first issue' and
such". This is, of course, to make it easier for newcomers to see what
needs to be done, where they can help, or even just have a place to
ping for mentoring.
So far so good.
In this post I want to talk about how issue trackers are flawed and
why. Many people still insist that issue trackers are a necessity in
the development space. This especially comes up when discussing
decentralisation efforts for development. Many people reject the idea
that git is inherently decentralised, citing the need for an issue
tracker as a reason why.
I want to go through some examples and show why issue trackers don't
work, why they can't work, and also highlight some alternatives.
## Onboarding
I wanna talk about onboarding first because it feels like the biggest
reason people insist on having issue trackers, and might be the one
place where they're not entirely terrible.
The idea of publishing a list of things that need to be worked on,
tailored to newcomers, is a good one. It gives people an easy way to
ask for help, and getting in the mood to submit their first patches to
a project. A "generic list of things that should be done" list can
also in general be usefol to project maintainers, to prevent someone
from keeping too much state in their head at all times.
I think that any project should have something like this, but it
doesn't require an issue tracker to implement. I will talk about the
alternatives later in this post, but the same mechanism can
be implemented using a shared collaboration pad or a mailing list.
## Tracking issues
Many people keep tracking issues in their projects to give external
people and maintainers a nice way of seeing what is being worked on
and what things need to be finished before a release can happen or a
feature is fully implemented. It also gives people the ability to
discuss things under the issue, coordinate, etc. So far so good.
The problems here are both that tracking issues are a lot of work to
maintain, and that discussions on these issues tend to derail.
Information that's posted on an issue might not be relevant anymore in
2 weeks, yet it is presented to anyone coming into the space as "what
to read next".
Some platforms will collapse some comments, but it still
requires someone new to the issue to read a _lot_ of stuff before
they understand the current state of affairs. And this doesn't even
address everyone using a different client, or e-mail notifications.
Also, a badly maintained tracking issue is worse than useless.
Sometimes issues only link to other issues, at which point the
conversation gets split between multiple places, making it even
more jarring to read. But the fact remains: long-living entities
ultimately attract bloat. Your tracking issues are informative now,
but what about in 3 months? Or 6 months?
## Asking questions/ report bugs
"What about reporting issues," I hear you say. "That's literally what
an issue is!" And granted, sometimes they can be good this way. But
there's something to consider here. While I lumped both "reporting
issues" and "asking questions" into the same category, they are
actually wildly different things from a maintainer's perspective.
They are sometimes considered one and the same, because the workflow
for a user is the same: open an issue and post some logs.
From a maintainer's perspective, answering a question usually doesn't
result in further issues once the problem has been resolved. Sometimes
a project might want to improve their documentation to make this
particular use-case easier to understand, but usually the software is
left unchanged.
However, if a bug was found that needs to be fixed, the issue now
describing it is written from the perspective of an outsider. Even
the most detailed bug reports are not going to capture enough internal
state to easily communicate to a project maintainer why something has
happened. The logs will be a guiding first principle, yes, but there
is a lot of work required for anyone looking at the issue in the
future.
A much better approach would be to repost the issue, describing what
is so far known about it, from the perspective of a maintainer. This
might link to the initial report, maybe adding the reporter to CC.
It would be aimed at other contributors, and future contributors,
making it easier for someone to pick up work on the issue later.

@ -0,0 +1,25 @@
Title: No, I won't work at Google
Category: Blog
Tags: ethics
Date: 2019-07-28
Status: Draft
Once in a while (every 6-9 months or so), I get an e-mail like this in my inbox:
> Hi Katherina,
>
> Hope you are doing well. I'm a tech sourcer with Google working on the
> expansion of our Europe software engineering teams. I came across your
> linkedin and github profiles and I'm interested in finding more about your
> experience. Would you be open having a quick formal chat about our current
> teams and projects we have across Europe to see if there's anything that
> may interest you?
I always send the same polite decline, wishing them a nice day.
And a few months later, someone else will e-mail me the exact same question.
I know that a few years ago, when Google first reached out to me to ask me
if I was interested in working for them, I was quite excited about the idea.
Since then a lot of things have changed, including my attitudes towards
big companies, and Google in particular.

@ -0,0 +1,19 @@
Title: Don't fear the sieve
Category: Blog
Tags: /dev/diary, e-mail, programming
Date: 2019-02-01
Slug: understanding-sieve
Status: Draft
If you don't already know, sieve (/siːv/) is an e-mail filtering language.
It's not Turing complete (for instance, it doesn't allow recursion)
and has been defined through a series of RFCs covering the base
language as well as several extensions.
The RFCs aren't exactly nice to read.
But luckily, there are plenty of tutorials on the internet,
that try to explain sieve.
Unfortunately most of them are garbage.
The main reason for this is that the articles never deal
with a realistic set of constraints or requirements.
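To give a sense of what we're dealing with, here is a minimal sketch of a base-spec sieve script; the list address and folder name are made-up examples:

```sieve
# "fileinto" is an optional capability, so it must be requested explicitly
require ["fileinto"];

# Sort mailing list traffic into its own folder
if header :contains "list-id" "devel.lists.example.org" {
    fileinto "Lists/devel";
}

# Any message that doesn't match falls through to the implicit "keep"
```

The base language is deliberately tiny: conditions, a handful of actions, and an implicit "keep" at the end. Everything else lives in extensions.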

@ -0,0 +1 @@
../statements/archive/2020-01-27.txt

@ -0,0 +1,9 @@
Title: Keys
Template: keys
I cycled my encryption key on **2019-06-20**.
If you still have a key from before that date,
please update it from a keyserver (or from [here])!
[here]:/555F2E4B6F87F91A4110.txt

@ -0,0 +1,22 @@
Title: Impressum
Template: page
### Katharina Fey
- E-Mail: [legal@spacekookie.de](mailto:legal@spacekookie.de)
- Responsible for content pursuant to § 55 Abs. 2 RStV
- Address: Abteilung für Redundanz Abteilung c/o Katharina Fey, Margaretenstraße 30, 10317 Berlin
#### Liability for content
As a service provider we are responsible for our own content on these pages under the general laws, pursuant to § 7 Abs. 1 TMG. However, pursuant to §§ 8 to 10 TMG, we as a service provider are not obliged to monitor transmitted or stored third-party information, or to investigate circumstances that indicate illegal activity. Obligations to remove or block the use of information under the general laws remain unaffected. Liability in this respect is, however, only possible from the point in time at which we become aware of a concrete legal violation. Upon becoming aware of such violations, we will remove the content in question immediately.
#### Liability for links
Our site contains links to external third-party websites over whose content we have no influence. We therefore cannot accept any liability for this third-party content. The respective provider or operator of the linked pages is always responsible for their content. The linked pages were checked for possible legal violations at the time of linking; no illegal content was identifiable at that time. Permanent monitoring of the content of linked pages is, however, not reasonable without concrete indications of a legal violation. Upon becoming aware of legal violations, we will remove such links immediately.
#### Copyright
The content and works created by the site operators on these pages are subject to German copyright law unless stated otherwise. Reproduction, editing, distribution and any kind of use beyond the limits of copyright law require the written consent of the respective author or creator. Downloads and copies of this site are only permitted for private, non-commercial use. Insofar as the content on this page was not created by the operator, the copyrights of third parties are respected; in particular, third-party content is marked as such. Should you nevertheless become aware of a copyright infringement, please notify us accordingly. Upon becoming aware of legal violations, we will remove such content immediately.
