Merge commit '3d31bfb8e27a3566154da69353f013a176735246'

Update 2022-06-16
Branch: main
Katharina Fey committed 2 years ago
commit 3723de4b0c
Signed by: kookie
GPG Key ID: 90734A9E619C8A6C
  1. infra/libkookie/nixpkgs/stable/.editorconfig (19)
  2. infra/libkookie/nixpkgs/stable/.git-blame-ignore-revs (32)
  3. infra/libkookie/nixpkgs/stable/.github/CODEOWNERS (185)
  4. infra/libkookie/nixpkgs/stable/.github/CONTRIBUTING.md (64)
  5. infra/libkookie/nixpkgs/stable/.github/ISSUE_TEMPLATE/bug_report.md (23)
  6. infra/libkookie/nixpkgs/stable/.github/ISSUE_TEMPLATE/build_failure.md (34)
  7. infra/libkookie/nixpkgs/stable/.github/ISSUE_TEMPLATE/out_of_date_package_report.md (6)
  8. infra/libkookie/nixpkgs/stable/.github/PULL_REQUEST_TEMPLATE.md (50)
  9. infra/libkookie/nixpkgs/stable/.github/STALE-BOT.md (2)
  10. infra/libkookie/nixpkgs/stable/.github/labeler.yml (17)
  11. infra/libkookie/nixpkgs/stable/.github/workflows/backport.yml (35)
  12. infra/libkookie/nixpkgs/stable/.github/workflows/basic-eval.yml (26)
  13. infra/libkookie/nixpkgs/stable/.github/workflows/direct-push.yml (5)
  14. infra/libkookie/nixpkgs/stable/.github/workflows/editorconfig.yml (23)
  15. infra/libkookie/nixpkgs/stable/.github/workflows/labels.yml (7)
  16. infra/libkookie/nixpkgs/stable/.github/workflows/manual-nixos.yml (9)
  17. infra/libkookie/nixpkgs/stable/.github/workflows/manual-nixpkgs.yml (9)
  18. infra/libkookie/nixpkgs/stable/.github/workflows/merge-staging.yml (41)
  19. infra/libkookie/nixpkgs/stable/.github/workflows/nixos-manual.yml (26)
  20. infra/libkookie/nixpkgs/stable/.github/workflows/no-channel.yml (21)
  21. infra/libkookie/nixpkgs/stable/.github/workflows/pending-set.yml (5)
  22. infra/libkookie/nixpkgs/stable/.github/workflows/periodic-merge-24h.yml (57)
  23. infra/libkookie/nixpkgs/stable/.github/workflows/periodic-merge-6h.yml (51)
  24. infra/libkookie/nixpkgs/stable/.github/workflows/rebase.yml (134)
  25. infra/libkookie/nixpkgs/stable/.github/workflows/update-terraform-providers.yml (48)
  26. infra/libkookie/nixpkgs/stable/.gitignore (7)
  27. infra/libkookie/nixpkgs/stable/.version (2)
  28. infra/libkookie/nixpkgs/stable/CONTRIBUTING.md (134)
  29. infra/libkookie/nixpkgs/stable/COPYING (2)
  30. infra/libkookie/nixpkgs/stable/README.md (28)
  31. infra/libkookie/nixpkgs/stable/doc/Makefile (38)
  32. infra/libkookie/nixpkgs/stable/doc/build-aux/pandoc-filters/docbook-reader/citerefentry-to-rst-role.lua (23)
  33. infra/libkookie/nixpkgs/stable/doc/build-aux/pandoc-filters/docbook-writer/labelless-link-is-xref.lua (34)
  34. infra/libkookie/nixpkgs/stable/doc/build-aux/pandoc-filters/docbook-writer/rst-roles.lua (36)
  35. infra/libkookie/nixpkgs/stable/doc/build-aux/pandoc-filters/link-unix-man-references.lua (17)
  36. infra/libkookie/nixpkgs/stable/doc/build-aux/pandoc-filters/myst-reader/roles.lua (29)
  37. infra/libkookie/nixpkgs/stable/doc/build-aux/pandoc-filters/myst-writer/roles.lua (25)
  38. infra/libkookie/nixpkgs/stable/doc/builders/fetchers.chapter.md (86)
  39. infra/libkookie/nixpkgs/stable/doc/builders/images/appimagetools.section.md (2)
  40. infra/libkookie/nixpkgs/stable/doc/builders/images/dockertools.section.md (26)
  41. infra/libkookie/nixpkgs/stable/doc/builders/images/ocitools.section.md (6)
  42. infra/libkookie/nixpkgs/stable/doc/builders/images/snaptools.section.md (6)
  43. infra/libkookie/nixpkgs/stable/doc/builders/packages/cataclysm-dda.section.md (6)
  44. infra/libkookie/nixpkgs/stable/doc/builders/packages/citrix.section.md (10)
  45. infra/libkookie/nixpkgs/stable/doc/builders/packages/eclipse.section.md (10)
  46. infra/libkookie/nixpkgs/stable/doc/builders/packages/elm.section.md (4)
  47. infra/libkookie/nixpkgs/stable/doc/builders/packages/emacs.section.md (8)
  48. infra/libkookie/nixpkgs/stable/doc/builders/packages/etc-files.section.md (18)
  49. infra/libkookie/nixpkgs/stable/doc/builders/packages/firefox.section.md (18)
  50. infra/libkookie/nixpkgs/stable/doc/builders/packages/fish.section.md (2)
  51. infra/libkookie/nixpkgs/stable/doc/builders/packages/fuse.section.md (4)
  52. infra/libkookie/nixpkgs/stable/doc/builders/packages/ibus.section.md (6)
  53. infra/libkookie/nixpkgs/stable/doc/builders/packages/index.xml (1)
  54. infra/libkookie/nixpkgs/stable/doc/builders/packages/linux.section.md (22)
  55. infra/libkookie/nixpkgs/stable/doc/builders/packages/locales.section.md (4)
  56. infra/libkookie/nixpkgs/stable/doc/builders/packages/nginx.section.md (4)
  57. infra/libkookie/nixpkgs/stable/doc/builders/packages/opengl.section.md (6)
  58. infra/libkookie/nixpkgs/stable/doc/builders/packages/shell-helpers.section.md (2)
  59. infra/libkookie/nixpkgs/stable/doc/builders/packages/steam.section.md (40)
  60. infra/libkookie/nixpkgs/stable/doc/builders/packages/urxvt.section.md (8)
  61. infra/libkookie/nixpkgs/stable/doc/builders/packages/weechat.section.md (12)
  62. infra/libkookie/nixpkgs/stable/doc/builders/packages/xorg.section.md (8)
  63. infra/libkookie/nixpkgs/stable/doc/builders/special/fhs-environments.section.md (8)
  64. infra/libkookie/nixpkgs/stable/doc/builders/special/mkshell.section.md (32)
  65. infra/libkookie/nixpkgs/stable/doc/builders/testers.chapter.md (128)
  66. infra/libkookie/nixpkgs/stable/doc/builders/trivial-builders.chapter.md (132)
  67. infra/libkookie/nixpkgs/stable/doc/contributing/coding-conventions.chapter.md (94)
  68. infra/libkookie/nixpkgs/stable/doc/contributing/contributing-to-documentation.chapter.md (84)
  69. infra/libkookie/nixpkgs/stable/doc/contributing/quick-start.chapter.md (2)
  70. infra/libkookie/nixpkgs/stable/doc/contributing/reviewing-contributions.chapter.md (14)
  71. infra/libkookie/nixpkgs/stable/doc/contributing/submitting-changes.chapter.md (53)
  72. infra/libkookie/nixpkgs/stable/doc/doc-support/default.nix (9)
  73. infra/libkookie/nixpkgs/stable/doc/doc-support/lib-function-docs.nix (1)
  74. infra/libkookie/nixpkgs/stable/doc/functions.xml (8)
  75. infra/libkookie/nixpkgs/stable/doc/functions/debug.section.md (5)
  76. infra/libkookie/nixpkgs/stable/doc/functions/debug.xml (14)
  77. infra/libkookie/nixpkgs/stable/doc/functions/generators.section.md (56)
  78. infra/libkookie/nixpkgs/stable/doc/functions/generators.xml (74)
  79. infra/libkookie/nixpkgs/stable/doc/functions/library.xml (2)
  80. infra/libkookie/nixpkgs/stable/doc/functions/library/attrsets.xml (4)
  81. infra/libkookie/nixpkgs/stable/doc/functions/nix-gitignore.section.md (49)
  82. infra/libkookie/nixpkgs/stable/doc/functions/nix-gitignore.xml (70)
  83. infra/libkookie/nixpkgs/stable/doc/functions/prefer-remote-fetch.section.md (17)
  84. infra/libkookie/nixpkgs/stable/doc/functions/prefer-remote-fetch.xml (21)
  85. infra/libkookie/nixpkgs/stable/doc/hooks/index.xml (10)
  86. infra/libkookie/nixpkgs/stable/doc/hooks/postgresql-test-hook.section.md (59)
  87. infra/libkookie/nixpkgs/stable/doc/languages-frameworks/agda.section.md (97)
  88. infra/libkookie/nixpkgs/stable/doc/languages-frameworks/android.section.md (33)
  89. infra/libkookie/nixpkgs/stable/doc/languages-frameworks/beam.section.md (196)
  90. infra/libkookie/nixpkgs/stable/doc/languages-frameworks/bower.section.md (2)
  91. infra/libkookie/nixpkgs/stable/doc/languages-frameworks/chicken.section.md (49)
  92. infra/libkookie/nixpkgs/stable/doc/languages-frameworks/coq.section.md (10)
  93. infra/libkookie/nixpkgs/stable/doc/languages-frameworks/crystal.section.md (6)
  94. infra/libkookie/nixpkgs/stable/doc/languages-frameworks/cuda.section.md (34)
  95. infra/libkookie/nixpkgs/stable/doc/languages-frameworks/dhall.section.md (61)
  96. infra/libkookie/nixpkgs/stable/doc/languages-frameworks/dotnet.section.md (81)
  97. infra/libkookie/nixpkgs/stable/doc/languages-frameworks/emscripten.section.md (18)
  98. infra/libkookie/nixpkgs/stable/doc/languages-frameworks/gnome.section.md (22)
  99. infra/libkookie/nixpkgs/stable/doc/languages-frameworks/go.section.md (41)
  100. infra/libkookie/nixpkgs/stable/doc/languages-frameworks/hy.section.md (31)
Some files were not shown because too many files have changed in this diff.

@@ -55,13 +55,15 @@ trim_trailing_whitespace = unset
[*.lock]
indent_size = unset
[eggs.nix]
# trailing whitespace is an actual syntax element of classic Markdown/
# CommonMark to enforce a line break
[*.md]
trim_trailing_whitespace = unset
[nixos/modules/services/networking/ircd-hybrid/*.{conf,in}]
[eggs.nix]
trim_trailing_whitespace = unset
[nixos/tests/systemd-networkd-vrf.nix]
[nixos/modules/services/networking/ircd-hybrid/*.{conf,in}]
trim_trailing_whitespace = unset
[pkgs/build-support/dotnetenv/Wrapper/**]
@@ -70,10 +72,6 @@ indent_style = unset
insert_final_newline = unset
trim_trailing_whitespace = unset
[pkgs/build-support/upstream-updater/**]
indent_style = unset
trim_trailing_whitespace = unset
[pkgs/development/compilers/elm/registry.dat]
end_of_line = unset
insert_final_newline = unset
@@ -87,10 +85,3 @@ trim_trailing_whitespace = unset
[pkgs/tools/misc/timidity/timidity.cfg]
trim_trailing_whitespace = unset
[pkgs/tools/security/enpass/data.json]
insert_final_newline = unset
trim_trailing_whitespace = unset
[pkgs/top-level/emscripten-packages.nix]
trim_trailing_whitespace = unset

@@ -0,0 +1,32 @@
# This file contains a list of commits that are not likely what you
# are looking for in a blame, such as mass reformatting or renaming.
# You can set this file as a default ignore file for blame by running
# the following command.
#
# $ git config blame.ignoreRevsFile .git-blame-ignore-revs
#
# To temporarily not use this file add
# --ignore-revs-file=""
# to your blame command.
#
# The ignoreRevsFile can't be set globally due to blame failing if the file isn't present.
# To not have to set the option in every repository it is needed in,
# save the following script in your path with the name "git-bblame"
# now you can run
# $ git bblame $FILE
# to use the .git-blame-ignore-revs file if it is present.
#
# #!/usr/bin/env bash
# repo_root=$(git rev-parse --show-toplevel)
# if [[ -e $repo_root/.git-blame-ignore-revs ]]; then
# git blame --ignore-revs-file="$repo_root/.git-blame-ignore-revs" $@
# else
# git blame $@
# fi
# nixos/modules/rename: Sort alphabetically
1f71224fe86605ef4cd23ed327b3da7882dad382
# nixos: fix module paths in rename.nix
d08ede042b74b8199dc748323768227b88efcf7c

@@ -6,6 +6,10 @@
#
# For documentation on this file, see https://help.github.com/articles/about-codeowners/
# Mentioned users will get code review requests.
#
# IMPORTANT NOTE: in order to actually get pinged, commit access is required.
# This also holds true for GitHub teams. Since almost none of our teams have write
# permissions, you need to list all members of the team with commit access individually.
# This file
/.github/CODEOWNERS @edolstra
@@ -19,7 +23,7 @@
# Libraries
/lib @edolstra @nbp @infinisil
/lib/systems @nbp @ericson2314 @matthewbauer
/lib/systems @alyssais @nbp @ericson2314 @matthewbauer
/lib/generators.nix @edolstra @nbp @Profpatsch
/lib/cli.nix @edolstra @nbp @Profpatsch
/lib/debug.nix @edolstra @nbp @Profpatsch
@@ -34,14 +38,21 @@
/pkgs/top-level/release-cross.nix @Ericson2314 @matthewbauer
/pkgs/stdenv/generic @Ericson2314 @matthewbauer
/pkgs/stdenv/cross @Ericson2314 @matthewbauer
/pkgs/build-support/cc-wrapper @Ericson2314 @orivej
/pkgs/build-support/bintools-wrapper @Ericson2314 @orivej
/pkgs/build-support/cc-wrapper @Ericson2314
/pkgs/build-support/bintools-wrapper @Ericson2314
/pkgs/build-support/setup-hooks @Ericson2314
/pkgs/build-support/setup-hooks/auto-patchelf.sh @aszlig
/pkgs/build-support/setup-hooks/auto-patchelf.sh @layus
/pkgs/build-support/setup-hooks/auto-patchelf.py @layus
# Nixpkgs build-support
/pkgs/build-support/writers @lassulus @Profpatsch
# Nixpkgs documentation
/maintainers/scripts/db-to-md.sh @jtojnar @ryantm
/maintainers/scripts/doc @jtojnar @ryantm
/doc/build-aux/pandoc-filters @jtojnar
/doc/contributing/contributing-to-documentation.chapter.md @jtojnar
# NixOS Internals
/nixos/default.nix @nbp @infinisil
/nixos/lib/from-env.nix @nbp @infinisil
@@ -59,10 +70,17 @@
/nixos/doc/manual/development/writing-modules.xml @nbp
/nixos/doc/manual/man-nixos-option.xml @nbp
/nixos/modules/installer/tools/nixos-option.sh @nbp
/nixos/modules/system @dasJ
# NixOS integration test driver
/nixos/lib/test-driver @tfc
# Systemd
/nixos/modules/system/boot/systemd.nix @NixOS/systemd
/nixos/modules/system/boot/systemd @NixOS/systemd
/nixos/lib/systemd-*.nix @NixOS/systemd
/pkgs/os-specific/linux/systemd @NixOS/systemd
# Updaters
## update.nix
/maintainers/scripts/update.nix @jtojnar
@@ -71,30 +89,31 @@
/pkgs/common-updater/scripts/update-source-version @jtojnar
# Python-related code and docs
/maintainers/scripts/update-python-libraries @FRidh
/pkgs/top-level/python-packages.nix @FRidh @jonringer
/pkgs/development/interpreters/python @FRidh
/pkgs/development/python-modules @FRidh @jonringer
/doc/languages-frameworks/python.section.md @FRidh
/pkgs/development/tools/poetry2nix @adisbladis
/maintainers/scripts/update-python-libraries @FRidh
/pkgs/top-level/python-packages.nix @FRidh @jonringer
/pkgs/development/interpreters/python @FRidh
/pkgs/development/python-modules @FRidh @jonringer
/doc/languages-frameworks/python.section.md @FRidh
/pkgs/development/tools/poetry2nix @adisbladis
/pkgs/development/interpreters/python/hooks @FRidh @jonringer
# Haskell
/doc/languages-frameworks/haskell.section.md @cdepillabout @sternenseemann @maralorn
/maintainers/scripts/haskell @cdepillabout @sternenseemann @maralorn
/pkgs/development/compilers/ghc @cdepillabout @sternenseemann @maralorn
/pkgs/development/haskell-modules @cdepillabout @sternenseemann @maralorn
/pkgs/test/haskell @cdepillabout @sternenseemann @maralorn
/pkgs/top-level/release-haskell.nix @cdepillabout @sternenseemann @maralorn
/pkgs/top-level/haskell-packages.nix @cdepillabout @sternenseemann @maralorn
/doc/languages-frameworks/haskell.section.md @cdepillabout @sternenseemann @maralorn @expipiplus1
/maintainers/scripts/haskell @cdepillabout @sternenseemann @maralorn @expipiplus1
/pkgs/development/compilers/ghc @cdepillabout @sternenseemann @maralorn @expipiplus1
/pkgs/development/haskell-modules @cdepillabout @sternenseemann @maralorn @expipiplus1
/pkgs/test/haskell @cdepillabout @sternenseemann @maralorn @expipiplus1
/pkgs/top-level/release-haskell.nix @cdepillabout @sternenseemann @maralorn @expipiplus1
/pkgs/top-level/haskell-packages.nix @cdepillabout @sternenseemann @maralorn @expipiplus1
# Perl
/pkgs/development/interpreters/perl @volth @stigtsp
/pkgs/top-level/perl-packages.nix @volth @stigtsp
/pkgs/development/perl-modules @volth @stigtsp
/pkgs/development/interpreters/perl @stigtsp @zakame
/pkgs/top-level/perl-packages.nix @stigtsp @zakame
/pkgs/development/perl-modules @stigtsp @zakame
# R
/pkgs/applications/science/math/R @peti
/pkgs/development/r-modules @peti
/pkgs/applications/science/math/R @jbedo
/pkgs/development/r-modules @jbedo
# Ruby
/pkgs/development/interpreters/ruby @marsam
@@ -102,11 +121,8 @@
# Rust
/pkgs/development/compilers/rust @Mic92 @LnL7 @zowoq
/pkgs/build-support/rust @andir @danieldk @zowoq
# Darwin-related
/pkgs/stdenv/darwin @NixOS/darwin-maintainers
/pkgs/os-specific/darwin @NixOS/darwin-maintainers
/pkgs/build-support/rust @zowoq
/doc/languages-frameworks/rust.section.md @zowoq
# C compilers
/pkgs/development/compilers/gcc @matthewbauer
@@ -116,14 +132,14 @@
/pkgs/top-level/unix-tools.nix @matthewbauer
/pkgs/development/tools/xcbuild @matthewbauer
# Beam-related (Erlang, Elixir, LFE, etc)
/pkgs/development/beam-modules @gleber
/pkgs/development/interpreters/erlang @gleber
/pkgs/development/interpreters/lfe @gleber
/pkgs/development/interpreters/elixir @gleber
/pkgs/development/tools/build-managers/rebar @gleber
/pkgs/development/tools/build-managers/rebar3 @gleber
/pkgs/development/tools/erlang @gleber
# Audio
/nixos/modules/services/audio/botamusique.nix @mweinelt
/nixos/modules/services/audio/snapserver.nix @mweinelt
/nixos/tests/modules/services/audio/botamusique.nix @mweinelt
/nixos/tests/snapcast.nix @mweinelt
# Browsers
/pkgs/applications/networking/browsers/firefox @mweinelt
# Jetbrains
/pkgs/applications/editors/jetbrains @edwtjo
@@ -151,21 +167,39 @@
/nixos/tests/hardened.nix @joachifm
/pkgs/os-specific/linux/kernel/hardened-config.nix @joachifm
# Home Automation
/nixos/modules/services/misc/home-assistant.nix @mweinelt
/nixos/modules/services/misc/zigbee2mqtt.nix @mweinelt
/nixos/tests/home-assistant.nix @mweinelt
/nixos/tests/zigbee2mqtt.nix @mweinelt
/pkgs/servers/home-assistant @mweinelt
/pkgs/tools/misc/esphome @mweinelt
# Network Time Daemons
/pkgs/tools/networking/chrony @thoughtpolice
/pkgs/tools/networking/ntp @thoughtpolice
/pkgs/tools/networking/openntpd @thoughtpolice
/nixos/modules/services/networking/ntp @thoughtpolice
# Network
/pkgs/tools/networking/kea/default.nix @mweinelt
/pkgs/tools/networking/babeld/default.nix @mweinelt
/nixos/modules/services/networking/babeld.nix @mweinelt
/nixos/modules/services/networking/kea.nix @mweinelt
/nixos/modules/services/networking/knot.nix @mweinelt
/nixos/tests/babeld.nix @mweinelt
/nixos/tests/kea.nix @mweinelt
/nixos/tests/knot.nix @mweinelt
# Dhall
/pkgs/development/dhall-modules @Gabriel439 @Profpatsch @ehmry
/pkgs/development/interpreters/dhall @Gabriel439 @Profpatsch @ehmry
/pkgs/development/dhall-modules @Gabriella439 @Profpatsch @ehmry
/pkgs/development/interpreters/dhall @Gabriella439 @Profpatsch @ehmry
# Idris
/pkgs/development/idris-modules @Infinisil
# Bazel
/pkgs/development/tools/build-managers/bazel @mboes @Profpatsch
/pkgs/development/tools/build-managers/bazel @Profpatsch
# NixOS modules for e-mail and dns services
/nixos/modules/services/mail/mailman.nix @peti
@@ -174,19 +208,18 @@
/nixos/modules/services/mail/rspamd.nix @peti
# Emacs
/pkgs/applications/editors/emacs-modes @adisbladis
/pkgs/applications/editors/emacs @adisbladis
/pkgs/top-level/emacs-packages.nix @adisbladis
/pkgs/applications/editors/emacs/elisp-packages @adisbladis
/pkgs/applications/editors/emacs @adisbladis
/pkgs/top-level/emacs-packages.nix @adisbladis
# Neovim
/pkgs/applications/editors/neovim @jonringer
/pkgs/applications/editors/neovim @teto
/pkgs/applications/editors/neovim @jonringer @teto
# VimPlugins
/pkgs/misc/vim-plugins @jonringer @softinio
/pkgs/applications/editors/vim/plugins @jonringer
# VsCode Extensions
/pkgs/misc/vscode-extensions @jonringer
/pkgs/applications/editors/vscode/extensions @jonringer
# Prometheus exporter modules and tests
/nixos/modules/services/monitoring/prometheus/exporters.nix @WilliButz
@@ -194,33 +227,65 @@
/nixos/tests/prometheus-exporters.nix @WilliButz
# PHP interpreter, packages, extensions, tests and documentation
/doc/languages-frameworks/php.section.md @NixOS/php
/nixos/tests/php @NixOS/php
/pkgs/build-support/build-pecl.nix @NixOS/php
/pkgs/development/interpreters/php @NixOS/php
/pkgs/development/php-packages @NixOS/php
/pkgs/top-level/php-packages.nix @NixOS/php
/doc/languages-frameworks/php.section.md @aanderse @etu @globin @ma27 @talyz
/nixos/tests/php @aanderse @etu @globin @ma27 @talyz
/pkgs/build-support/build-pecl.nix @aanderse @etu @globin @ma27 @talyz
/pkgs/development/interpreters/php @jtojnar @aanderse @etu @globin @ma27 @talyz
/pkgs/development/php-packages @aanderse @etu @globin @ma27 @talyz
/pkgs/top-level/php-packages.nix @jtojnar @aanderse @etu @globin @ma27 @talyz
# Podman, CRI-O modules and related
/nixos/modules/virtualisation/containers.nix @NixOS/podman @zowoq
/nixos/modules/virtualisation/cri-o.nix @NixOS/podman @zowoq
/nixos/modules/virtualisation/podman.nix @NixOS/podman @zowoq
/nixos/tests/cri-o.nix @NixOS/podman @zowoq
/nixos/tests/podman.nix @NixOS/podman @zowoq
/nixos/modules/virtualisation/containers.nix @zowoq @adisbladis
/nixos/modules/virtualisation/cri-o.nix @zowoq @adisbladis
/nixos/modules/virtualisation/podman @zowoq @adisbladis
/nixos/tests/cri-o.nix @zowoq @adisbladis
/nixos/tests/podman @zowoq @adisbladis
# Docker tools
/pkgs/build-support/docker @roberth @utdemir
/nixos/tests/docker-tools-overlay.nix @roberth
/nixos/tests/docker-tools.nix @roberth
/doc/builders/images/dockertools.xml @roberth
/pkgs/build-support/docker @roberth
/nixos/tests/docker-tools* @roberth
/doc/builders/images/dockertools.section.md @roberth
# Blockchains
/pkgs/applications/blockchains @mmahut @RaghavSood
# Go
/doc/languages-frameworks/go.section.md @kalbasit @Mic92 @zowoq
/pkgs/development/compilers/go @kalbasit @Mic92 @zowoq
/pkgs/development/go-modules @kalbasit @Mic92 @zowoq
/pkgs/development/go-packages @kalbasit @Mic92 @zowoq
# GNOME
/pkgs/desktops/gnome @jtojnar @hedning
/pkgs/desktops/gnome/extensions @piegamesde @jtojnar @hedning
# Cinnamon
/pkgs/desktops/cinnamon @mkg20001
# nim
/pkgs/development/compilers/nim @ehmry
/pkgs/development/nim-packages @ehmry
/pkgs/top-level/nim-packages.nix @ehmry
# terraform providers
/pkgs/applications/networking/cluster/terraform-providers @zowoq
# kubernetes
/nixos/doc/manual/configuration/kubernetes.chapter.md @zowoq
/nixos/modules/services/cluster/kubernetes @zowoq
/nixos/tests/kubernetes @zowoq
/pkgs/applications/networking/cluster/kubernetes @zowoq
# Matrix
/pkgs/servers/heisenbridge @piegamesde
/pkgs/servers/matrix-conduit @piegamesde
/pkgs/servers/matrix-synapse/matrix-appservice-irc @piegamesde
/nixos/modules/services/misc/heisenbridge.nix @piegamesde
/nixos/modules/services/misc/matrix-appservice-irc.nix @piegamesde
/nixos/modules/services/misc/matrix-conduit.nix @piegamesde
/nixos/tests/matrix-appservice-irc.nix @piegamesde
/nixos/tests/matrix-conduit.nix @piegamesde
# Dotnet
/pkgs/build-support/dotnet @IvarWithoutBones
/pkgs/development/compilers/dotnet @IvarWithoutBones

@@ -1,64 +0,0 @@
# How to contribute
Note: contributing implies licensing those contributions
under the terms of [COPYING](../COPYING), which is an MIT-like license.
## Opening issues
* Make sure you have a [GitHub account](https://github.com/signup/free)
* Make sure there is no open issue on the topic
* [Submit a new issue](https://github.com/NixOS/nixpkgs/issues/new/choose) by choosing the kind of topic and filling out the template
## Submitting changes
* Format the commit messages in the following way:
```
(pkg-name | nixos/<module>): (from -> to | init at version | refactor | etc)
(Motivation for change. Additional information.)
```
For consistency, there should not be a period at the end of the commit message's summary line (the first line of the commit message).
Examples:
* nginx: init at 2.0.1
* firefox: 54.0.1 -> 55.0
* nixos/hydra: add bazBaz option
Dual baz behavior is needed to do foo.
* nixos/nginx: refactor config generation
The old config generation system used impure shell scripts and could break in specific circumstances (see #1234).
* `meta.description` should:
* Be capitalized.
* Not start with the package name.
* Not have a period at the end.
* `meta.license` must be set and fit the upstream license.
* If there is no upstream license, `meta.license` should default to `lib.licenses.unfree`.
* `meta.maintainers` must be set.
See the nixpkgs manual for more details on [standard meta-attributes](https://nixos.org/nixpkgs/manual/#sec-standard-meta-attributes) and on how to [submit changes to nixpkgs](https://nixos.org/nixpkgs/manual/#chap-submitting-changes).
## Writing good commit messages
In addition to writing properly formatted commit messages, it's important to include relevant information so other developers can later understand *why* a change was made. While this information usually can be found by digging code, mailing list/Discourse archives, pull request discussions or upstream changes, it may require a lot of work.
For package version upgrades and such a one-line commit message is usually sufficient.
## Backporting changes
Follow these steps to backport a change into a release branch in compliance with the [commit policy](https://nixos.org/nixpkgs/manual/#submitting-changes-stable-release-branches).
1. Take note of the commits in which the change was introduced into `master` branch.
2. Check out the target _release branch_, e.g. `release-20.09`. Do not use a _channel branch_ like `nixos-20.09` or `nixpkgs-20.09`.
3. Create a branch for your change, e.g. `git checkout -b backport`.
4. When the reason to backport is not obvious from the original commit message, use `git cherry-pick -xe <original commit>` and add a reason. Otherwise use `git cherry-pick -x <original commit>`. That's fine for minor version updates that only include security and bug fixes, commits that fix an otherwise broken package, or similar. Please also ensure the commits exist on the master branch; in the case of squashed or rebased merges, the commit hash will change and the new commits can be found in the merge message at the bottom of the master pull request.
5. Push to GitHub and open a backport pull request. Make sure to select the release branch (e.g. `release-20.09`) as the target branch of the pull request, and link to the pull request in which the original change was committed to `master`. The pull request title should be the commit title with the release version as prefix, e.g. `[20.09]`.
6. When the backport pull request is merged and you have the necessary privileges, you can also replace the label `9.needs: port to stable` with `8.has: port to stable` on the original pull request. This way maintainers can keep track of missing backports more easily.
## Reviewing contributions
See the nixpkgs manual for more details on how to [Review contributions](https://nixos.org/nixpkgs/manual/#chap-reviewing-contributions).

@@ -7,37 +7,34 @@ assignees: ''
---
**Describe the bug**
### Describe the bug
A clear and concise description of what the bug is.
**To Reproduce**
### Steps To Reproduce
Steps to reproduce the behavior:
1. ...
2. ...
3. ...
**Expected behavior**
### Expected behavior
A clear and concise description of what you expected to happen.
**Screenshots**
### Screenshots
If applicable, add screenshots to help explain your problem.
**Additional context**
### Additional context
Add any other context about the problem here.
**Notify maintainers**
### Notify maintainers
<!--
Please @ people who are in the `meta.maintainers` list of the offending package or module.
If in doubt, check `git blame` for whoever last touched something.
-->
**Metadata**
### Metadata
Please run `nix-shell -p nix-info --run "nix-info -m"` and paste the result.
Maintainer information:
```yaml
# a list of nixpkgs attributes affected by the problem
attribute:
# a list of nixos modules affected by the problem
module:
```console
[user@system:~]$ nix-shell -p nix-info --run "nix-info -m"
output here
```

@@ -0,0 +1,34 @@
---
name: Build failure
about: Create a report to help us improve
title: ''
labels: '0.kind: build failure'
assignees: ''
---
### Steps To Reproduce
Steps to reproduce the behavior:
1. build *X*
### Build log
```
log here if short otherwise a link to a gist
```
### Additional context
Add any other context about the problem here.
### Notify maintainers
<!--
Please @ people who are in the `meta.maintainers` list of the offending package or module.
If in doubt, check `git blame` for whoever last touched something.
-->
### Metadata
Please run `nix-shell -p nix-info --run "nix-info -m"` and paste the result.
```console
[user@system:~]$ nix-shell -p nix-info --run "nix-info -m"
output here
```

@@ -13,10 +13,10 @@ assignees: ''
<!-- Note that these are hard requirements -->
<!--
You can use the "Go to file" functionality on github to find the package
You can use the "Go to file" functionality on GitHub to find the package
Then you can go to the history for this package
Find the latest "package_name: old_version -> new_version" commit
The "new_version" is the the current version of the package
The "new_version" is the current version of the package
-->
- [ ] Checked the [nixpkgs master branch](https://github.com/NixOS/nixpkgs)
<!--
@@ -29,7 +29,7 @@ There's a high chance that you'll have the new version right away while helping
###### Project name
`nix search` name:
<!--
The current version can be found easily with the same process than above for checking the master branch
The current version can be found easily with the same process as above for checking the master branch
If an open PR is present for the package, take this version as the current one and link to the PR
-->
current version:

@@ -1,27 +1,41 @@
###### Description of changes
<!--
For package updates please link to a changelog or describe changes, this helps your fellow maintainers discover breaking updates.
For new packages please briefly describe the package or provide a link to its homepage.
-->
###### Things done
<!-- Please check what applies. Note that these are not hard requirements but merely serve as information for reviewers. -->
- Built on platform(s)
- [ ] x86_64-linux
- [ ] aarch64-linux
- [ ] x86_64-darwin
- [ ] aarch64-darwin
- [ ] For non-Linux: Is `sandbox = true` set in `nix.conf`? (See [Nix manual](https://nixos.org/manual/nix/stable/command-ref/conf-file.html))
- [ ] Tested, as applicable:
- [NixOS test(s)](https://nixos.org/manual/nixos/unstable/index.html#sec-nixos-tests) (look inside [nixos/tests](https://github.com/NixOS/nixpkgs/blob/master/nixos/tests))
- and/or [package tests](https://nixos.org/manual/nixpkgs/unstable/#sec-package-tests)
- or, for functions and "core" functionality, tests in [lib/tests](https://github.com/NixOS/nixpkgs/blob/master/lib/tests) or [pkgs/test](https://github.com/NixOS/nixpkgs/blob/master/pkgs/test)
- made sure NixOS tests are [linked](https://nixos.org/manual/nixpkgs/unstable/#ssec-nixos-tests-linking) to the relevant packages
- [ ] Tested compilation of all packages that depend on this change using `nix-shell -p nixpkgs-review --run "nixpkgs-review rev HEAD"`. Note: all changes have to be committed, also see [nixpkgs-review usage](https://github.com/Mic92/nixpkgs-review#usage)
- [ ] Tested basic functionality of all binary files (usually in `./result/bin/`)
- [22.11 Release Notes (or backporting 22.05 Release notes)](https://github.com/NixOS/nixpkgs/blob/master/CONTRIBUTING.md#generating-2211-release-notes)
- [ ] (Package updates) Added a release notes entry if the change is major or breaking
- [ ] (Module updates) Added a release notes entry if the change is significant
- [ ] (Module addition) Added a release notes entry if adding a new NixOS module
- [ ] (Release notes changes) Ran `nixos/doc/manual/md-to-db.sh` to update generated release notes
- [ ] Fits [CONTRIBUTING.md](https://github.com/NixOS/nixpkgs/blob/master/CONTRIBUTING.md).
<!--
To help with the large amounts of pull requests, we would appreciate your
reviews of other pull requests, especially simple package updates. Just leave a
comment describing what you have tested in the relevant package/service.
Reviewing helps to reduce the average time-to-merge for everyone.
Thanks a lot if you do!
List of open PRs: https://github.com/NixOS/nixpkgs/pulls
Reviewing guidelines: https://nixos.org/manual/nixpkgs/unstable/#chap-reviewing-contributions
-->
###### Motivation for this change
###### Things done
<!-- Please check what applies. Note that these are not hard requirements but merely serve as information for reviewers. -->
- [ ] Tested using sandboxing ([nix.useSandbox](https://nixos.org/nixos/manual/options.html#opt-nix.useSandbox) on NixOS, or option `sandbox` in [`nix.conf`](https://nixos.org/nix/manual/#sec-conf-file) on non-NixOS linux)
- Built on platform(s)
- [ ] NixOS
- [ ] macOS
- [ ] other Linux distributions
- [ ] Tested via one or more NixOS test(s) if existing and applicable for the change (look inside [nixos/tests](https://github.com/NixOS/nixpkgs/blob/master/nixos/tests))
- [ ] Tested compilation of all pkgs that depend on this change using `nix-shell -p nixpkgs-review --run "nixpkgs-review wip"`
- [ ] Tested execution of all binary files (usually in `./result/bin/`)
- [ ] Added a release notes entry if the change is major or breaking
- [ ] Fits [CONTRIBUTING.md](https://github.com/NixOS/nixpkgs/blob/master/.github/CONTRIBUTING.md).

@@ -3,7 +3,7 @@
- Thanks for your contribution!
- To remove the stale label, just leave a new comment.
- _How to find the right people to ping?_ &rarr; [`git blame`](https://git-scm.com/docs/git-blame) to the rescue! (or GitHub's history and blame buttons.)
- You can always ask for help on [our Discourse Forum](https://discourse.nixos.org/) or on the [#nixos IRC channel](https://webchat.freenode.net/#nixos).
- You can always ask for help on [our Discourse Forum](https://discourse.nixos.org/), [our Matrix room](https://matrix.to/#/#nix:nixos.org), or on the [#nixos IRC channel](https://web.libera.chat/#nixos).
## Suggestions for PRs

@@ -5,10 +5,6 @@
- pkgs/development/libraries/agda/**/*
- pkgs/top-level/agda-packages.nix
"6.topic: bsd":
- pkgs/os-specific/bsd/**/*
- pkgs/stdenv/freebsd/**/*
"6.topic: cinnamon":
- pkgs/desktops/cinnamon/**/*
@@ -16,7 +12,7 @@
- nixos/modules/services/editors/emacs.nix
- nixos/modules/services/editors/emacs.xml
- nixos/tests/emacs-daemon.nix
- pkgs/applications/editors/emacs-modes/**/*
- pkgs/applications/editors/emacs/elisp-packages/**/*
- pkgs/applications/editors/emacs/**/*
- pkgs/build-support/emacs/**/*
- pkgs/top-level/emacs-packages.nix
@@ -70,6 +66,13 @@
"6.topic: nixos":
- nixos/**/*
- pkgs/os-specific/linux/nixos-rebuild/**/*
"6.topic: nim":
- doc/languages-frameworks/nim.section.md
- pkgs/development/compilers/nim/*
- pkgs/development/nim-packages/**/*
- pkgs/top-level/nim-packages.nix
"6.topic: ocaml":
- doc/languages-frameworks/ocaml.section.md
@@ -135,7 +138,9 @@
"6.topic: vim":
- doc/languages-frameworks/vim.section.md
- pkgs/applications/editors/vim/**/*
- pkgs/misc/vim-plugins/**/*
- pkgs/applications/editors/vim/plugins/**/*
- nixos/modules/programs/neovim.nix
- pkgs/applications/editors/neovim/**/*
"6.topic: xfce":
- nixos/doc/manual/configuration/xfce.xml

@@ -0,0 +1,35 @@
name: Backport
on:
pull_request_target:
types: [closed, labeled]
# WARNING:
# When extending this action, be aware that $GITHUB_TOKEN allows write access to
# the GitHub repository. This means that it should not evaluate user input in a
# way that allows code injection.
jobs:
backport:
name: Backport Pull Request
if: github.repository_owner == 'NixOS' && github.event.pull_request.merged == true && (github.event_name != 'labeled' || startsWith('backport', github.event.label.name))
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
# required to find all branches
fetch-depth: 0
ref: ${{ github.event.pull_request.head.sha }}
- name: Create backport PRs
# should be kept in sync with `version`
uses: zeebe-io/backport-action@v0.0.5
with:
# Config README: https://github.com/zeebe-io/backport-action#backport-action
github_token: ${{ secrets.GITHUB_TOKEN }}
github_workspace: ${{ github.workspace }}
# should be kept in sync with `uses`
version: v0.0.5
pull_description: |-
Bot-based backport to `${target_branch}`, triggered by a label in #${pull_number}.
* [ ] Before merging, ensure that this backport complies with the [Criteria for Backporting](https://github.com/NixOS/nixpkgs/blob/master/CONTRIBUTING.md#criteria-for-backporting-changes).
* Even as a non-committer, if you find that it does not comply, leave a comment.

@@ -0,0 +1,26 @@
name: Basic evaluation checks
on:
workflow_dispatch
# pull_request:
# branches:
# - master
# - release-**
# push:
# branches:
# - master
# - release-**
jobs:
tests:
runs-on: ubuntu-latest
# we don't limit this action to only NixOS repo since the checks are cheap and useful developer feedback
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v17
- uses: cachix/cachix-action@v10
with:
# This cache is for the nixpkgs repo checks and should not be trusted or used elsewhere.
name: nixpkgs-ci
signingKey: '${{ secrets.CACHIX_SIGNING_KEY }}'
# explicit list of supportedSystems is needed until aarch64-darwin becomes part of the trunk jobset
- run: nix-build pkgs/top-level/release.nix -A tarball.nixpkgs-basic-release-checks --arg supportedSystems '[ "aarch64-darwin" "aarch64-linux" "x86_64-linux" "x86_64-darwin" ]'

@@ -17,9 +17,12 @@ jobs:
run: |
ISMERGE=$(curl -H 'Accept: application/vnd.github.groot-preview+json' -H "authorization: Bearer ${{ secrets.GITHUB_TOKEN }}" https://api.github.com/repos/${{ env.GITHUB_REPOSITORY }}/commits/${{ env.GITHUB_SHA }}/pulls | jq -r '.[] | select(.merge_commit_sha == "${{ env.GITHUB_SHA }}") | any')
echo "::set-output name=ismerge::$ISMERGE"
# GitHub events are eventually consistent, so wait until changes propagate to their DB
- run: sleep 60
if: steps.ismerge.outputs.ismerge != 'true'
- name: Warn if the commit was a direct push
if: steps.ismerge.outputs.ismerge != 'true'
uses: peter-evans/commit-comment@v1
uses: peter-evans/commit-comment@v2
with:
body: |
@${{ github.actor }}, you pushed a commit directly to master/release branch

@@ -11,36 +11,33 @@ on:
jobs:
tests:
runs-on: ubuntu-latest
if: github.repository_owner == 'NixOS'
if: "github.repository_owner == 'NixOS' && !contains(github.event.pull_request.title, '[skip editorconfig]')"
steps:
- name: Get list of changed files from PR
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
echo 'PR_DIFF<<EOF' >> $GITHUB_ENV
gh api \
repos/NixOS/nixpkgs/pulls/${{github.event.number}}/files --paginate \
| jq '.[] | select(.status != "removed") | .filename' \
>> $GITHUB_ENV
echo 'EOF' >> $GITHUB_ENV
- uses: actions/checkout@v2
> "$HOME/changed_files"
- name: print list of changed files
run: |
cat "$HOME/changed_files"
- uses: actions/checkout@v3
with:
# pull_request_target checks out the base branch by default
ref: refs/pull/${{ github.event.pull_request.number }}/merge
if: env.PR_DIFF
- uses: cachix/install-nix-action@v13
if: env.PR_DIFF
- uses: cachix/install-nix-action@v17
with:
# nixpkgs commit is pinned so that it doesn't break
nix_path: nixpkgs=https://github.com/NixOS/nixpkgs/archive/f93ecc4f6bc60414d8b73dbdf615ceb6a2c604df.tar.gz
# editorconfig-checker 2.4.0
nix_path: nixpkgs=https://github.com/NixOS/nixpkgs/archive/c473cc8714710179df205b153f4e9fa007107ff9.tar.gz
- name: install editorconfig-checker
run: nix-env -iA editorconfig-checker -f '<nixpkgs>'
if: env.PR_DIFF
- name: Checking EditorConfig
if: env.PR_DIFF
run: |
echo "$PR_DIFF" | xargs editorconfig-checker -disable-indent-size
cat "$HOME/changed_files" | xargs -r editorconfig-checker -disable-indent-size
- if: ${{ failure() }}
run: |
echo "::error :: Hey! It looks like your changes don't follow our editorconfig settings. Read https://editorconfig.org/#download to configure your editor so you never see this error again."

@@ -4,6 +4,11 @@ on:
pull_request_target:
types: [edited, opened, synchronize, reopened]
# WARNING:
# When extending this action, be aware that $GITHUB_TOKEN allows some write
# access to the GitHub API. This means that it should not evaluate user input in
# a way that allows code injection.
permissions:
contents: read
pull-requests: write
@@ -13,7 +18,7 @@ jobs:
runs-on: ubuntu-latest
if: github.repository_owner == 'NixOS'
steps:
- uses: actions/labeler@v3
- uses: actions/labeler@v4
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
sync-labels: true

@@ -12,18 +12,19 @@ on:
jobs:
nixos:
runs-on: ubuntu-latest
if: github.repository_owner == 'NixOS'
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
with:
# pull_request_target checks out the base branch by default
ref: refs/pull/${{ github.event.pull_request.number }}/merge
- uses: cachix/install-nix-action@v13
- uses: cachix/install-nix-action@v17
with:
# explicitly enable sandbox
extra_nix_config: sandbox = true
- uses: cachix/cachix-action@v9
- uses: cachix/cachix-action@v10
with:
# This cache is for the nixos/nixpkgs manual builds and should not be trusted or used elsewhere.
# This cache is for the nixpkgs repo checks and should not be trusted or used elsewhere.
name: nixpkgs-ci
signingKey: '${{ secrets.CACHIX_SIGNING_KEY }}'
- name: Building NixOS manual

@@ -12,18 +12,19 @@ on:
jobs:
nixpkgs:
runs-on: ubuntu-latest
if: github.repository_owner == 'NixOS'
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
with:
# pull_request_target checks out the base branch by default
ref: refs/pull/${{ github.event.pull_request.number }}/merge
- uses: cachix/install-nix-action@v13
- uses: cachix/install-nix-action@v17
with:
# explicitly enable sandbox
extra_nix_config: sandbox = true
- uses: cachix/cachix-action@v9
- uses: cachix/cachix-action@v10
with:
# This cache is for the nixos/nixpkgs manual builds and should not be trusted or used elsewhere.
# This cache is for the nixpkgs repo checks and should not be trusted or used elsewhere.
name: nixpkgs-ci
signingKey: '${{ secrets.CACHIX_SIGNING_KEY }}'
- name: Building Nixpkgs manual

@@ -1,41 +0,0 @@
name: "merge staging(-next)"
on:
schedule:
# * is a special character in YAML so you have to quote this string
# Merge every 6 hours
- cron: '0 */6 * * *'
jobs:
sync-branch:
if: github.repository == 'NixOS/nixpkgs'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Merge master into staging-next
id: staging_next
uses: devmasx/merge-branch@v1.3.1
with:
type: now
from_branch: master
target_branch: staging-next
github_token: ${{ secrets.GITHUB_TOKEN }}
- name: Merge staging-next into staging
id: staging
uses: devmasx/merge-branch@v1.3.1
with:
type: now
from_branch: staging-next
target_branch: staging
github_token: ${{ secrets.GITHUB_TOKEN }}
- name: Comment on failure
uses: peter-evans/create-or-update-comment@v1
if: ${{ failure() }}
with:
issue-number: 105153
body: |
An automatic merge${{ (steps.staging_next.outcome == 'failure' && ' from master to staging-next') || ((steps.staging.outcome == 'failure' && ' from staging-next to staging') || '') }} [failed](https://github.com/NixOS/nixpkgs/actions/runs/${{ github.run_id }}).

@@ -0,0 +1,26 @@
name: NixOS manual checks
permissions: read-all
on:
pull_request_target:
branches-ignore:
- 'release-**'
paths:
- 'nixos/**/*.xml'
- 'nixos/**/*.md'
jobs:
tests:
runs-on: ubuntu-latest
if: github.repository_owner == 'NixOS'
steps:
- uses: actions/checkout@v3
with:
# pull_request_target checks out the base branch by default
ref: refs/pull/${{ github.event.pull_request.number }}/merge
- uses: cachix/install-nix-action@v17
- name: Check DocBook files generated from Markdown are consistent
run: |
nixos/doc/manual/md-to-db.sh
git diff --exit-code

@@ -0,0 +1,21 @@
name: "No channel PR"
on:
pull_request:
branches:
- 'nixos-**'
- 'nixpkgs-**'
jobs:
fail:
name: "This PR is is targeting a channel branch"
runs-on: ubuntu-latest
steps:
- run: |
cat <<EOF
The nixos-* and nixpkgs-* branches are pushed to by the channel
release script and should not be merged into directly.
Please target the equivalent release-* branch or master instead.
EOF
exit 1

@@ -3,6 +3,11 @@ name: "set pending status"
on:
pull_request_target:
# WARNING:
# When extending this action, be aware that $GITHUB_TOKEN allows write access to
# the GitHub repository. This means that it should not evaluate user input in a
# way that allows code injection.
jobs:
action:
runs-on: ubuntu-latest

@@ -0,0 +1,57 @@
# This action periodically merges base branches into staging branches.
# This is done to
# * prevent conflicts or rather resolve them early
# * make all potential breakage happen on the staging branch
# * and make sure that all major rebuilds happen before the staging
# branch gets merged back into its base branch.
name: "Periodic Merges (24h)"
on:
schedule:
# * is a special character in YAML so you have to quote this string
# Merge every 24 hours
- cron: '0 0 * * *'
jobs:
periodic-merge:
if: github.repository_owner == 'NixOS'
runs-on: ubuntu-latest
strategy:
# don't fail fast, so that all pairs are tried
fail-fast: false
# certain branches need to be merged in order, like master->staging-next->staging
# and disabling parallelism ensures the order of the pairs below.
max-parallel: 1
matrix:
pairs:
- from: master
into: haskell-updates
- from: release-21.11
into: staging-next-21.11
- from: staging-next-21.11
into: staging-21.11
- from: release-22.05
into: staging-next-22.05
- from: staging-next-22.05
into: staging-22.05
name: ${{ matrix.pairs.from }} → ${{ matrix.pairs.into }}
steps:
- uses: actions/checkout@v3
- name: ${{ matrix.pairs.from }} → ${{ matrix.pairs.into }}
uses: devmasx/merge-branch@1.4.0
with:
type: now
from_branch: ${{ matrix.pairs.from }}
target_branch: ${{ matrix.pairs.into }}
github_token: ${{ secrets.GITHUB_TOKEN }}
- name: Comment on failure
uses: peter-evans/create-or-update-comment@v2
if: ${{ failure() }}
with:
issue-number: 105153
body: |
Periodic merge from `${{ matrix.pairs.from }}` into `${{ matrix.pairs.into }}` has [failed](https://github.com/NixOS/nixpkgs/actions/runs/${{ github.run_id }}).

@@ -0,0 +1,51 @@
# This action periodically merges base branches into staging branches.
# This is done to
# * prevent conflicts or rather resolve them early
# * make all potential breakage happen on the staging branch
# * and make sure that all major rebuilds happen before the staging
# branch gets merged back into its base branch.
name: "Periodic Merges (6h)"
on:
schedule:
# * is a special character in YAML so you have to quote this string
# Merge every 6 hours
- cron: '0 */6 * * *'
jobs:
periodic-merge:
if: github.repository_owner == 'NixOS'
runs-on: ubuntu-latest
strategy:
# don't fail fast, so that all pairs are tried
fail-fast: false
# certain branches need to be merged in order, like master->staging-next->staging
# and disabling parallelism ensures the order of the pairs below.
max-parallel: 1
matrix:
pairs:
- from: master
into: staging-next
- from: staging-next
into: staging
name: ${{ matrix.pairs.from }} → ${{ matrix.pairs.into }}
steps:
- uses: actions/checkout@v3
- name: ${{ matrix.pairs.from }} → ${{ matrix.pairs.into }}
uses: devmasx/merge-branch@1.4.0
with:
type: now
from_branch: ${{ matrix.pairs.from }}
target_branch: ${{ matrix.pairs.into }}
github_token: ${{ secrets.GITHUB_TOKEN }}
- name: Comment on failure
uses: peter-evans/create-or-update-comment@v2
if: ${{ failure() }}
with:
issue-number: 105153
body: |
Periodic merge from `${{ matrix.pairs.from }}` into `${{ matrix.pairs.into }}` has [failed](https://github.com/NixOS/nixpkgs/actions/runs/${{ github.run_id }}).

@@ -1,134 +0,0 @@
on:
issue_comment:
types:
- created
# This action allows people with write access to the repo to rebase a PRs base branch
# by commenting `/rebase ${branch}` on the PR while avoiding CODEOWNER notifications.
jobs:
rebase:
runs-on: ubuntu-latest
if: github.repository_owner == 'NixOS' && github.event.issue.pull_request != '' && contains(github.event.comment.body, '/rebase')
steps:
- uses: peter-evans/create-or-update-comment@v1
with:
comment-id: ${{ github.event.comment.id }}
reactions: eyes
- uses: scherermichael-oss/action-has-permission@1.0.6
id: check-write-access
with:
required-permission: write
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: check permissions
run: |
echo "Commenter doesn't have write access to the repo"
exit 1
if: "! steps.check-write-access.outputs.has-permission"
- name: setup
run: |
curl "https://api.github.com/repos/${{ github.repository }}/pulls/${{ github.event.issue.number }}" 2>/dev/null >pr.json
cat <<EOF >>"$GITHUB_ENV"
CAN_MODIFY=$(jq -r '.maintainer_can_modify' pr.json)
COMMITS=$(jq -r '.commits' pr.json)
CURRENT_BASE=$(jq -r '.base.ref' pr.json)
PR_BRANCH=$(jq -r '.head.ref' pr.json)
COMMENT_BRANCH=$(echo ${{ github.event.comment.body }} | awk "/^\/rebase / {print \$2}")
PULL_REQUEST=${{ github.event.issue.number }}
EOF
rm pr.json
- name: check branch
env:
PERMANENT_BRANCHES: "haskell-updates|master|nixos|nixpkgs|python-unstable|release|staging"
VALID_BRANCHES: "haskell-updates|master|python-unstable|release-20.09|staging|staging-20.09|staging-next"
run: |
message() {
cat <<EOF
Can't rebase $PR_BRANCH from $CURRENT_BASE onto $COMMENT_BRANCH (PR:$PULL_REQUEST COMMITS:$COMMITS)
EOF
}
if ! [[ "$COMMENT_BRANCH" =~ ^($VALID_BRANCHES)$ ]]; then
cat <<EOF
Check that the branch from the comment is valid:
$(message)
This action can only rebase onto these branches:
$VALID_BRANCHES
\`/rebase \${branch}\` must be at the start of the line
EOF
exit 1
fi
if [[ "$COMMENT_BRANCH" == "$CURRENT_BASE" ]]; then
cat <<EOF
Check that the branch from the comment isn't the current base branch:
$(message)
EOF
exit 1
fi
if [[ "$COMMENT_BRANCH" == "$PR_BRANCH" ]]; then
cat <<EOF
Check that the branch from the comment isn't the current branch:
$(message)
EOF
exit 1
fi
if [[ "$PR_BRANCH" =~ ^($PERMANENT_BRANCHES) ]]; then
cat <<EOF
Check that the PR branch isn't a permanent branch:
$(message)
EOF
exit 1
fi
if [[ "$CAN_MODIFY" != "true" ]]; then
cat <<EOF
Check that maintainers can edit the PR branch:
$(message)
EOF
exit 1
fi
- uses: actions/checkout@v2
with:
fetch-depth: 0
- name: rebase pull request
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
git config --global user.email "41898282+github-actions[bot]@users.noreply.github.com"
git config --global user.name "github-actions[bot]"
git fetch origin
gh pr checkout "$PULL_REQUEST"
git rebase \
--onto="$(git merge-base origin/"$CURRENT_BASE" origin/"$COMMENT_BRANCH")" \
"HEAD~$COMMITS"
git push --force
curl \
-X POST \
-H "Accept: application/vnd.github.v3+json" \
-H "Authorization: token $GITHUB_TOKEN" \
-d "{ \"base\": \"$COMMENT_BRANCH\" }" \
"https://api.github.com/repos/${{ github.repository }}/pulls/$PULL_REQUEST"
curl \
-X PATCH \
-H "Accept: application/vnd.github.v3+json" \
-H "Authorization: token $GITHUB_TOKEN" \
-d '{ "state": "closed" }' \
"https://api.github.com/repos/${{ github.repository }}/pulls/$PULL_REQUEST"
- uses: peter-evans/create-or-update-comment@v1
with:
issue-number: ${{ github.event.issue.number }}
body: |
Rebased, please reopen the pull request to restart CI
- uses: peter-evans/create-or-update-comment@v1
if: failure()
with:
issue-number: ${{ github.event.issue.number }}
body: |
[Failed to rebase](https://github.com/${{ github.repository }}/actions/runs/${{ github.run_id }})

@@ -0,0 +1,48 @@
name: "Update terraform-providers"
on:
schedule:
- cron: "14 3 * * 1"
workflow_dispatch:
jobs:
tf-providers:
if: github.repository_owner == 'NixOS' && github.ref == 'refs/heads/master' # ensure workflow_dispatch only runs on master
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: cachix/install-nix-action@v17
- name: setup
id: setup
run: |
echo ::set-output name=title::"terraform-providers: update $(date -u +"%Y-%m-%d")"
- name: update terraform-providers
run: |
git config user.email "41898282+github-actions[bot]@users.noreply.github.com"
git config user.name "github-actions[bot]"
pushd pkgs/applications/networking/cluster/terraform-providers
./update-all-providers --no-build
git commit -m "${{ steps.setup.outputs.title }}" providers.json
popd
- name: create PR
uses: peter-evans/create-pull-request@v4
with:
body: |
Automatic update by [update-terraform-providers](https://github.com/NixOS/nixpkgs/blob/master/.github/workflows/update-terraform-providers.yml) action.
Check that all providers build with:
```
@ofborg build terraform-full
```
branch: terraform-providers-update
delete-branch: false
labels: "2.status: work-in-progress"
title: ${{ steps.setup.outputs.title }}
token: ${{ secrets.GITHUB_TOKEN }}
- name: comment on failure
uses: peter-evans/create-or-update-comment@v2
if: ${{ failure() }}
with:
issue-number: 153416
body: |
Automatic update of terraform providers [failed](https://github.com/NixOS/nixpkgs/actions/runs/${{ github.run_id }}).

@@ -2,8 +2,12 @@
,*
.*.swp
.*.swo
.idea/
.vscode/
outputs/
result
result-*
source/
/doc/NEWS.html
/doc/NEWS.txt
/doc/manual.html
@@ -20,3 +24,6 @@ __pycache__
# generated by pkgs/common-updater/update-script.nix
update-git-commits.txt
# JetBrains IDEA module declaration file
/nixpkgs.iml

@@ -0,0 +1,134 @@
# How to contribute
Note: contributing implies licensing those contributions
under the terms of [COPYING](COPYING), which is an MIT-like license.
## Opening issues
* Make sure you have a [GitHub account](https://github.com/signup/free)
* Make sure there is no open issue on the topic
* [Submit a new issue](https://github.com/NixOS/nixpkgs/issues/new/choose) by choosing the kind of topic and filling out the template
## Submitting changes
Read the ["Submitting changes"](https://nixos.org/nixpkgs/manual/#chap-submitting-changes) section of the nixpkgs manual. It explains how to write, test, and iterate on your change, and which branch to base your pull request against.
Below is a short excerpt of some points in there:
* Format the commit messages in the following way:
```
(pkg-name | nixos/<module>): (from -> to | init at version | refactor | etc)
(Motivation for change. Link to release notes. Additional information.)
```
For consistency, there should not be a period at the end of the commit message's summary line (the first line of the commit message).
Examples:
* nginx: init at 2.0.1
* firefox: 54.0.1 -> 55.0
https://www.mozilla.org/en-US/firefox/55.0/releasenotes/
* nixos/hydra: add bazBaz option
Dual baz behavior is needed to do foo.
* nixos/nginx: refactor config generation
The old config generation system used impure shell scripts and could break in specific circumstances (see #1234).
* `meta.description` should:
* Be capitalized.
* Not start with the package name.
* Not have a period at the end.
* `meta.license` must be set and fit the upstream license.
* If there is no upstream license, `meta.license` should default to `lib.licenses.unfree`.
* `meta.maintainers` must be set.
See the nixpkgs manual for more details on [standard meta-attributes](https://nixos.org/nixpkgs/manual/#sec-standard-meta-attributes).
## Writing good commit messages
In addition to writing properly formatted commit messages, it's important to include relevant information so other developers can later understand *why* a change was made. While this information usually can be found by digging code, mailing list/Discourse archives, pull request discussions or upstream changes, it may require a lot of work.
For package version upgrades and such a one-line commit message is usually sufficient.
## Rebasing between branches (i.e. from master to staging)
From time to time, changes between branches must be rebased, for example, if the
number of new rebuilds they would cause is too large for the target branch. When
rebasing, care must be taken to include only the intended changes, otherwise
many CODEOWNERS will be inadvertently requested for review. To achieve this,
rebasing should not be performed directly on the target branch, but on the merge
base between the current and target branch.
In the following example, we see a rebase from `master` onto the merge base
between `master` and `staging`, so that a change can eventually be retargeted to
`staging`. The example uses `upstream` as the remote for `NixOS/nixpkgs.git`
while the `origin` remote is used for the remote you are pushing to.
```console
# Find the common base between two branches
common=$(git merge-base upstream/master upstream/staging)
# Find the common base between your feature branch and master
commits=$(git merge-base $(git branch --show-current) upstream/master)
# Rebase all commits onto the common base
git rebase --onto=$common $commits
# Force push your changes
git push origin $(git branch --show-current) --force-with-lease
```
Then change the base branch in the GitHub PR using the *Edit* button in the upper
right corner, and switch from `master` to `staging`. After the PR has been
retargeted it might be necessary to do a final rebase onto the target branch, to
resolve any outstanding merge conflicts.
```console
# Rebase onto target branch
git rebase upstream/staging
# Review and fixup possible conflicts
git status
# Force push your changes
git push origin $(git branch --show-current) --force-with-lease
```
## Backporting changes
Follow these steps to backport a change into a release branch in compliance with the [commit policy](https://nixos.org/nixpkgs/manual/#submitting-changes-stable-release-branches).
You can add a label such as `backport release-22.05` to a PR, so that merging it will
automatically create a backport (via [a GitHub Action](.github/workflows/backport.yml)).
This also works for PRs that have already been merged, and might take a couple of minutes to trigger.
You can also create the backport manually:
1. Take note of the commits in which the change was introduced into `master` branch.
2. Check out the target _release branch_, e.g. `release-22.05`. Do not use a _channel branch_ like `nixos-22.05` or `nixpkgs-22.05-darwin`.
3. Create a branch for your change, e.g. `git checkout -b backport`.
4. When the reason to backport is not obvious from the original commit message, use `git cherry-pick -xe <original commit>` and add a reason. Otherwise use `git cherry-pick -x <original commit>`; that's fine for minor version updates that only include security and bug fixes, commits that fix an otherwise broken package, or similar (a sketch of these commands is shown after this list). Please also ensure the commits exist on the master branch; in the case of squashed or rebased merges, the commit hash will change and the new commits can be found in the merge message at the bottom of the master pull request.
5. Push to GitHub and open a backport pull request. Make sure to select the release branch (e.g. `release-22.05`) as the target branch of the pull request, and link to the pull request in which the original change was committed to `master`. The pull request title should be the commit title with the release version as a prefix, e.g. `[22.05]`.
6. When the backport pull request is merged and you have the necessary privileges, you can also replace the label `9.needs: port to stable` with `8.has: port to stable` on the original pull request. This way maintainers can keep track of missing backports more easily.
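As a rough sketch, the manual workflow above might look like this on the command line (using the `upstream`/`origin` remote convention from the rebasing section; the commit hash is a placeholder):
```console
git fetch upstream
git checkout -b backport upstream/release-22.05
git cherry-pick -x <original commit>
git push origin backport
```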
## Criteria for Backporting changes
Anything that does not cause user or downstream dependency regressions can be backported. This includes:
- New Packages / Modules
- Security / Patch updates
- Version updates which include new functionality (but no breaking changes)
- Services which require a client to be up-to-date regardless. (E.g. `spotify`, `steam`, or `discord`)
- Security critical applications (E.g. `firefox`)
## Generating 22.11 Release Notes
Documentation in nixpkgs is transitioning to a markdown-centric workflow. Release notes now require a translation step to convert from markdown to a compatible docbook document.
Steps for updating 22.11 Release notes:
1. Edit `nixos/doc/manual/release-notes/rl-2211.section.md` with the desired changes
2. Run `./nixos/doc/manual/md-to-db.sh` to render `nixos/doc/manual/from_md/release-notes/rl-2211.section.xml`
3. Include changes to `rl-2211.section.md` and `rl-2211.section.xml` in the same commit.
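Taken together, these steps might look like the following (the commit message is only illustrative):
```console
$EDITOR nixos/doc/manual/release-notes/rl-2211.section.md
./nixos/doc/manual/md-to-db.sh
git add nixos/doc/manual/release-notes/rl-2211.section.md \
        nixos/doc/manual/from_md/release-notes/rl-2211.section.xml
git commit -m "rl-2211: mention example change"
```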
## Reviewing contributions
See the nixpkgs manual for more details on how to [Review contributions](https://nixos.org/nixpkgs/manual/#chap-reviewing-contributions).

@ -1,4 +1,4 @@
Copyright (c) 2003-2021 Eelco Dolstra and the Nixpkgs/NixOS contributors
Copyright (c) 2003-2022 Eelco Dolstra and the Nixpkgs/NixOS contributors
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the

@ -1,14 +1,19 @@
<p align="center">
<a href="https://nixos.org/nixos"><img src="https://nixos.org/logo/nixos-hires.png" width="500px" alt="NixOS logo" /></a>
<a href="https://nixos.org#gh-light-mode-only">
<img src="https://raw.githubusercontent.com/NixOS/nixos-homepage/master/logo/nixos-hires.png" width="500px" alt="NixOS logo"/>
</a>
<a href="https://nixos.org#gh-dark-mode-only">
<img src="https://raw.githubusercontent.com/NixOS/nixos-artwork/master/logo/nixos-white.png" width="500px" alt="NixOS logo"/>
</a>
</p>
<p align="center">
<a href="https://www.codetriage.com/nixos/nixpkgs"><img src="https://www.codetriage.com/nixos/nixpkgs/badges/users.svg" alt="Code Triagers badge" /></a>
<a href="https://opencollective.com/nixos"><img src="https://opencollective.com/nixos/tiers/supporter/badge.svg?label=Supporter&color=brightgreen" alt="Open Collective supporters" /></a>
<a href="https://github.com/NixOS/nixpkgs/blob/master/CONTRIBUTING.md"><img src="https://img.shields.io/github/contributors-anon/NixOS/nixpkgs" alt="Contributors badge" /></a>
<a href="https://opencollective.com/nixos"><img src="https://opencollective.com/nixos/tiers/supporter/badge.svg?label=supporters&color=brightgreen" alt="Open Collective supporters" /></a>
</p>
[Nixpkgs](https://github.com/nixos/nixpkgs) is a collection of over
60,000 software packages that can be installed with the
80,000 software packages that can be installed with the
[Nix](https://nixos.org/nix/) package manager. It also implements
[NixOS](https://nixos.org/nixos/), a purely-functional Linux distribution.
@ -21,10 +26,10 @@
# Community
* [Discourse Forum](https://discourse.nixos.org/)
* [IRC - #nixos on freenode.net](irc://irc.freenode.net/#nixos)
* [Matrix Chat](https://matrix.to/#/#community:nixos.org)
* [NixOS Weekly](https://weekly.nixos.org/)
* [Community-maintained wiki](https://nixos.wiki/)
* [Community-maintained list of ways to get in touch](https://nixos.wiki/wiki/Get_In_Touch#Chat) (Discord, Matrix, Telegram, other IRC channels, etc.)
* [Community-maintained list of ways to get in touch](https://nixos.wiki/wiki/Get_In_Touch#Chat) (Discord, Telegram, IRC, etc.)
# Other Project Repositories
@ -46,14 +51,14 @@ Nixpkgs and NixOS are built and tested by our continuous integration
system, [Hydra](https://hydra.nixos.org/).
* [Continuous package builds for unstable/master](https://hydra.nixos.org/jobset/nixos/trunk-combined)
* [Continuous package builds for the NixOS 20.09 release](https://hydra.nixos.org/jobset/nixos/release-20.09)
* [Continuous package builds for the NixOS 22.05 release](https://hydra.nixos.org/jobset/nixos/release-22.05)
* [Tests for unstable/master](https://hydra.nixos.org/job/nixos/trunk-combined/tested#tabs-constituents)
* [Tests for the NixOS 20.09 release](https://hydra.nixos.org/job/nixos/release-20.09/tested#tabs-constituents)
* [Tests for the NixOS 22.05 release](https://hydra.nixos.org/job/nixos/release-22.05/tested#tabs-constituents)
Artifacts successfully built with Hydra are published to cache at
https://cache.nixos.org/. When successful build and test criteria are
met, the Nixpkgs expressions are distributed via [Nix
channels](https://nixos.org/nix/manual/#sec-channels).
channels](https://nixos.org/manual/nix/stable/package-management/channels.html).
# Contributing
@ -87,7 +92,7 @@ Most contributions are based on and merged into these branches:
deemed of sufficiently high quality
For more information about contributing to the project, please visit
the [contributing page](https://github.com/NixOS/nixpkgs/blob/master/.github/CONTRIBUTING.md).
the [contributing page](https://github.com/NixOS/nixpkgs/blob/master/CONTRIBUTING.md).
# Donations
@ -97,7 +102,8 @@ Foundation](https://nixos.org/nixos/foundation.html). To ensure the
continuity and expansion of the NixOS infrastructure, we are looking
for donations to our organization.
You can donate to the NixOS foundation by using Open Collective:
You can donate to the NixOS foundation through [SEPA bank
transfers](https://nixos.org/donate.html) or by using Open Collective:
<a href="https://opencollective.com/nixos#support"><img src="https://opencollective.com/nixos/tiers/supporter.svg?width=890" /></a>

@ -1,5 +1,21 @@
MD_TARGETS=$(addsuffix .xml, $(basename $(shell find . -type f -regex '.*\.md$$' -not -name README.md)))
PANDOC ?= pandoc
pandoc_media_dir = media
# NOTE: Keep in sync with NixOS manual (/nixos/doc/manual/md-to-db.sh) and conversion script (/maintainers/scripts/db-to-md.sh).
# TODO: Remove raw-attribute when we can get rid of DocBook altogether.
pandoc_commonmark_enabled_extensions = +attributes+fenced_divs+footnotes+bracketed_spans+definition_lists+pipe_tables+raw_attribute
# Not needed:
# - docbook-reader/citerefentry-to-rst-role.lua (only relevant for DocBook → MarkDown/rST/MyST)
pandoc_flags = --extract-media=$(pandoc_media_dir) \
--lua-filter=$(PANDOC_LUA_FILTERS_DIR)/diagram-generator.lua \
--lua-filter=build-aux/pandoc-filters/myst-reader/roles.lua \
--lua-filter=build-aux/pandoc-filters/link-unix-man-references.lua \
--lua-filter=build-aux/pandoc-filters/docbook-writer/rst-roles.lua \
--lua-filter=build-aux/pandoc-filters/docbook-writer/labelless-link-is-xref.lua \
-f commonmark$(pandoc_commonmark_enabled_extensions)+smart
.PHONY: all
all: validate format out/html/index.html out/epub/manual.epub
@ -22,7 +38,7 @@ fix-misc-xml:
.PHONY: clean
clean:
rm -f ${MD_TARGETS} doc-support/result .version manual-full.xml functions/library/locations.xml functions/library/generated
rm -rf ./out/ ./highlightjs
rm -rf ./out/ ./highlightjs ./media
.PHONY: validate
validate: manual-full.xml doc-support/result
@ -39,7 +55,7 @@ out/html/index.html: doc-support/result manual-full.xml style.css highlightjs
mkdir -p out/html/highlightjs/
cp -r highlightjs out/html/
cp -r media out/html/
cp -r $(pandoc_media_dir) out/html/
cp ./overrides.css out/html/
cp ./style.css out/html/style.css
@ -54,7 +70,7 @@ out/epub/manual.epub: manual-full.xml
doc-support/result/epub.xsl \
./manual-full.xml
cp -r media out/epub/scratch/OEBPS
cp -r $(pandoc_media_dir) out/epub/scratch/OEBPS
cp ./overrides.css out/epub/scratch/OEBPS
cp ./style.css out/epub/scratch/OEBPS
mkdir -p out/epub/scratch/OEBPS/images/callouts/
@ -89,16 +105,12 @@ functions/library/generated: doc-support/result
ln -rfs ./doc-support/result/function-docs functions/library/generated
%.section.xml: %.section.md
pandoc $^ -t docbook \
--extract-media=media \
--lua-filter=$(PANDOC_LUA_FILTERS_DIR)/diagram-generator.lua \
-f markdown+smart \
| cat > $@
$(PANDOC) $^ -t docbook \
$(pandoc_flags) \
-o $@
%.chapter.xml: %.chapter.md
pandoc $^ -t docbook \
$(PANDOC) $^ -t docbook \
--top-level-division=chapter \
--extract-media=media \
--lua-filter=$(PANDOC_LUA_FILTERS_DIR)/diagram-generator.lua \
-f markdown+smart \
| cat > $@
$(pandoc_flags) \
-o $@

@ -0,0 +1,23 @@
--[[
Converts Code AST nodes produced by pandoc's DocBook reader
from citerefentry elements into AST for corresponding role
for reStructuredText.
We use a subset of MyST syntax (CommonMark with features from rST)
so let's use the rST AST for rST features.
Reference: https://www.sphinx-doc.org/en/master/usage/restructuredtext/roles.html#role-manpage
]]
function Code(elem)
elem.classes = elem.classes:map(function (x)
if x == 'citerefentry' then
elem.attributes['role'] = 'manpage'
return 'interpreted-text'
else
return x
end
end)
return elem
end

@ -0,0 +1,34 @@
--[[
Converts Link AST nodes with empty label to DocBook xref elements.
This is a temporary script to be able to use cross-references conveniently
using syntax taken from MyST, while we still use docbook-xsl
for generating the documentation.
Reference: https://myst-parser.readthedocs.io/en/latest/using/syntax.html#targets-and-cross-referencing
]]
local function starts_with(start, str)
return str:sub(1, #start) == start
end
local function escape_xml_arg(arg)
local amps = arg:gsub('&', '&amp;')
local amps_quotes = amps:gsub('"', '&quot;')
local amps_quotes_lt = amps_quotes:gsub('<', '&lt;')
return amps_quotes_lt
end
function Link(elem)
local has_no_content = #elem.content == 0
local targets_anchor = starts_with('#', elem.target)
local has_no_attributes = elem.title == '' and elem.identifier == '' and #elem.classes == 0 and #elem.attributes == 0
if has_no_content and targets_anchor and has_no_attributes then
-- xref expects idref without the pound-sign
local target_without_hash = elem.target:sub(2, #elem.target)
return pandoc.RawInline('docbook', '<xref linkend="' .. escape_xml_arg(target_without_hash) .. '" />')
end
end

@ -0,0 +1,36 @@
--[[
Converts AST for reStructuredText roles into corresponding
DocBook elements.
Currently, only a subset of roles is supported.
Reference:
List of roles:
https://www.sphinx-doc.org/en/master/usage/restructuredtext/roles.html
manpage:
https://tdg.docbook.org/tdg/5.1/citerefentry.html
file:
https://tdg.docbook.org/tdg/5.1/filename.html
]]
function Code(elem)
if elem.classes:includes('interpreted-text') then
local tag = nil
local content = elem.text
if elem.attributes['role'] == 'manpage' then
tag = 'citerefentry'
local title, volnum = content:match('^(.+)%((%w+)%)$')
if title == nil then
-- No volnum in parentheses.
title = content
end
content = '<refentrytitle>' .. title .. '</refentrytitle>' .. (volnum ~= nil and ('<manvolnum>' .. volnum .. '</manvolnum>') or '')
elseif elem.attributes['role'] == 'file' then
tag = 'filename'
end
if tag ~= nil then
return pandoc.RawInline('docbook', '<' .. tag .. '>' .. content .. '</' .. tag .. '>')
end
end
end

@ -0,0 +1,17 @@
--[[
Turns a manpage reference into a link, when a mapping is defined below.
]]
local man_urls = {
["tmpfiles.d(5)"] = "https://www.freedesktop.org/software/systemd/man/tmpfiles.d.html",
["nix.conf(5)"] = "https://nixos.org/manual/nix/stable/#sec-conf-file",
["systemd.time(7)"] = "https://www.freedesktop.org/software/systemd/man/systemd.time.html",
["systemd.timer(5)"] = "https://www.freedesktop.org/software/systemd/man/systemd.timer.html",
}
function Code(elem)
local is_man_role = elem.classes:includes('interpreted-text') and elem.attributes['role'] == 'manpage'
if is_man_role and man_urls[elem.text] ~= nil then
return pandoc.Link(elem, man_urls[elem.text])
end
end

@ -0,0 +1,29 @@
--[[
Replaces Str AST nodes containing {role}, followed by a Code node
by a Code node with attrs that would be produced by rST reader
from the role syntax.
This is to emulate MyST syntax in Pandoc.
(MyST is a CommonMark flavour with rST features mixed in.)
Reference: https://myst-parser.readthedocs.io/en/latest/syntax/syntax.html#roles-an-in-line-extension-point
]]
function Inlines(inlines)
for i = #inlines-1,1,-1 do
local first = inlines[i]
local second = inlines[i+1]
local correct_tags = first.tag == 'Str' and second.tag == 'Code'
if correct_tags then
-- docutils supports alphanumeric strings separated by [-._:]
-- We are slightly more liberal for simplicity.
local role = first.text:match('^{([-._+:%w]+)}$')
if role ~= nil then
inlines:remove(i)
second.attributes['role'] = role
second.classes:insert('interpreted-text')
end
end
end
return inlines
end

@ -0,0 +1,25 @@
--[[
Replaces Code nodes with attrs that would be produced by rST reader
from the role syntax by a Str AST node containing {role}, followed by a Code node.
This is to emulate MyST syntax in Pandoc.
(MyST is a CommonMark flavour with rST features mixed in.)
Reference: https://myst-parser.readthedocs.io/en/latest/syntax/syntax.html#roles-an-in-line-extension-point
]]
function Code(elem)
local role = elem.attributes['role']
if elem.classes:includes('interpreted-text') and role ~= nil then
elem.classes = elem.classes:filter(function (c)
return c ~= 'interpreted-text'
end)
elem.attributes['role'] = nil
return {
pandoc.Str('{' .. role .. '}'),
elem,
}
end
end

@ -1,8 +1,16 @@
# Fetchers {#chap-pkgs-fetchers}
When using Nix, you will frequently need to download source code and other files from the internet. Nixpkgs comes with a few helper functions that allow you to fetch fixed-output derivations in a structured way.
When using Nix, you will frequently need to download source code and other files from the internet. For this purpose, Nix provides the [_fixed output derivation_](https://nixos.org/manual/nix/stable/#fixed-output-drvs) feature and Nixpkgs provides various functions that implement the actual fetching from various protocols and services.
The two fetcher primitives are `fetchurl` and `fetchzip`. Both of these have two required arguments, a URL and a hash. The hash is typically `sha256`, although many more hash algorithms are supported. Nixpkgs contributors are currently recommended to use `sha256`. This hash will be used by Nix to identify your source. A typical usage of fetchurl is provided below.
## Caveats
Because fixed output derivations are _identified_ by their hash, a common mistake is to update a fetcher's URL or a version parameter, without updating the hash. **This will cause the old contents to be used.** So remember to always invalidate the hash argument.
For those who develop and maintain fetchers, a similar problem arises with changes to the implementation of a fetcher. These may cause a fixed output derivation to fail, but won't normally be caught by tests because the supposed output is already in the store or cache. For the purpose of testing, you can use a trick that is embodied by the [`invalidateFetcherByDrvHash`](#tester-invalidateFetcherByDrvHash) function. It uses the derivation `name` to create a unique output path per fetcher implementation, defeating the caching precisely where it would be harmful.
## `fetchurl` and `fetchzip` {#fetchurl}
Two basic fetchers are `fetchurl` and `fetchzip`. Both of these have two required arguments, a URL and a hash. The hash is typically `sha256`, although many more hash algorithms are supported. Nixpkgs contributors are currently recommended to use `sha256`. This hash will be used by Nix to identify your source. A typical usage of `fetchurl` is provided below.
```nix
{ stdenv, fetchurl }:
@ -16,63 +24,91 @@ stdenv.mkDerivation {
}
```
The main difference between `fetchurl` and `fetchzip` is in how they store the contents. `fetchurl` will store the unaltered contents of the URL within the Nix store. `fetchzip` on the other hand will decompress the archive for you, making files and directories directly accessible in the future. `fetchzip` can only be used with archives. Despite the name, `fetchzip` is not limited to .zip files and can also be used with any tarball.
`fetchpatch` works very similarly to `fetchurl` with the same arguments expected. It expects patch files as a source and performs normalization on them before computing the checksum. For example it will remove comments or other unstable parts that are sometimes added by version control systems and can change over time.
The main difference between `fetchurl` and `fetchzip` is in how they store the contents. `fetchurl` will store the unaltered contents of the URL within the Nix store. `fetchzip` on the other hand, will decompress the archive for you, making files and directories directly accessible in the future. `fetchzip` can only be used with archives. Despite the name, `fetchzip` is not limited to .zip files and can also be used with any tarball.
`fetchpatch` works very similarly to `fetchurl` with the same arguments expected. It expects patch files as a source and performs normalization on them before computing the checksum. For example, it will remove comments or other unstable parts that are sometimes added by version control systems and can change over time.
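As a sketch, a typical `fetchpatch` call might look like this (the URL and hash are placeholders):
```nix
patches = [
  (fetchpatch {
    url = "https://github.com/some-owner/some-repo/commit/0123456789abcdef.patch";
    sha256 = "0000000000000000000000000000000000000000000000000000";
  })
];
```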
Other fetcher functions allow you to add source code directly from a VCS such as Subversion or Git. These are mostly straightforward, with names based on the command used with the VCS system. Because they give you a working repository, they act most like `fetchzip`.
Most other fetchers return a directory rather than a single file.
## `fetchsvn`
## `fetchsvn` {#fetchsvn}
Used with Subversion. Expects `url` to a Subversion directory, `rev`, and `sha256`.
## `fetchgit`
## `fetchgit` {#fetchgit}
Used with Git. Expects `url` to a Git repo, `rev`, and `sha256`. `rev` in this case can be the full Git commit ID (SHA1 hash) or a tag name like `refs/tags/v1.0`.
Additionally the following optional arguments can be given: `fetchSubmodules = true` makes `fetchgit` also fetch the submodules of a repository. If `deepClone` is set to true, the entire repository is cloned as opposing to just creating a shallow clone. `deepClone = true` also implies `leaveDotGit = true` which means that the `.git` directory of the clone won't be removed after checkout.
Additionally, the following optional arguments can be given: `fetchSubmodules = true` makes `fetchgit` also fetch the submodules of a repository. If `deepClone` is set to true, the entire repository is cloned, as opposed to just creating a shallow clone. `deepClone = true` also implies `leaveDotGit = true`, which means that the `.git` directory of the clone won't be removed after checkout.
## `fetchfossil`
If only parts of the repository are needed, `sparseCheckout` can be used. This will prevent git from fetching unnecessary blobs from the server; see [git sparse-checkout](https://git-scm.com/docs/git-sparse-checkout) and [git clone --filter](https://git-scm.com/docs/git-clone#Documentation/git-clone.txt---filterltfilter-specgt) for more information:
```nix
{ stdenv, fetchgit }:
stdenv.mkDerivation {
name = "hello";
src = fetchgit {
url = "https://...";
sparseCheckout = ''
path/to/be/included
another/path
'';
sha256 = "0000000000000000000000000000000000000000000000000000";
};
}
```
## `fetchfossil` {#fetchfossil}
Used with Fossil. Expects `url` to a Fossil archive, `rev`, and `sha256`.
## `fetchcvs`
## `fetchcvs` {#fetchcvs}
Used with CVS. Expects `cvsRoot`, `tag`, and `sha256`.
## `fetchhg`
## `fetchhg` {#fetchhg}
Used with Mercurial. Expects `url`, `rev`, and `sha256`.
A number of fetcher functions wrap part of `fetchurl` and `fetchzip`. They are mainly convenience functions intended for commonly used destinations of source code in Nixpkgs. These wrapper fetchers are listed below.
## `fetchFromGitHub`
## `fetchFromGitea` {#fetchfromgitea}
`fetchFromGitea` expects five arguments. `domain` is the Gitea server name. `owner` is a string corresponding to the Gitea user or organization that controls this repository. `repo` corresponds to the name of the software repository. These are located at the top of every Gitea HTML page as `owner`/`repo`. `rev` corresponds to the Git commit hash or tag (e.g. `v1.0`) that will be downloaded from Git. Finally, `sha256` corresponds to the hash of the extracted directory. Again, other hash algorithms are also available, but `sha256` is currently preferred.
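A sketch of a `fetchFromGitea` call, with placeholder values:
```nix
src = fetchFromGitea {
  domain = "gitea.example.org";
  owner = "some-owner";
  repo = "some-repo";
  rev = "v1.0";
  sha256 = "0000000000000000000000000000000000000000000000000000";
};
```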
`fetchFromGitHub` expects four arguments. `owner` is a string corresponding to the GitHub user or organization that controls this repository. `repo` corresponds to the name of the software repository. These are located at the top of every GitHub HTML page as `owner`/`repo`. `rev` corresponds to the Git commit hash or tag (e.g `v1.0`) that will be downloaded from Git. Finally, `sha256` corresponds to the hash of the extracted directory. Again, other hash algorithms are also available but `sha256` is currently preferred.
## `fetchFromGitHub` {#fetchfromgithub}
`fetchFromGitHub` expects four arguments. `owner` is a string corresponding to the GitHub user or organization that controls this repository. `repo` corresponds to the name of the software repository. These are located at the top of every GitHub HTML page as `owner`/`repo`. `rev` corresponds to the Git commit hash or tag (e.g. `v1.0`) that will be downloaded from Git. Finally, `sha256` corresponds to the hash of the extracted directory. Again, other hash algorithms are also available, but `sha256` is currently preferred.
`fetchFromGitHub` uses `fetchzip` to download the source archive generated by GitHub for the specified revision. If `leaveDotGit`, `deepClone` or `fetchSubmodules` are set to `true`, `fetchFromGitHub` will use `fetchgit` instead. Refer to its section for documentation of these options.
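A sketch of a `fetchFromGitHub` call, with placeholder values:
```nix
src = fetchFromGitHub {
  owner = "some-owner";
  repo = "some-repo";
  rev = "v1.0";
  sha256 = "0000000000000000000000000000000000000000000000000000";
};
```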
## `fetchFromGitLab`
## `fetchFromGitLab` {#fetchfromgitlab}
This is used with GitLab repositories. The arguments expected are very similar to fetchFromGitHub above.
This is used with GitLab repositories. The arguments expected are very similar to `fetchFromGitHub` above.
## `fetchFromGitiles`
## `fetchFromGitiles` {#fetchfromgitiles}
This is used with Gitiles repositories. The arguments expected are similar to fetchgit.
This is used with Gitiles repositories. The arguments expected are similar to `fetchgit`.
## `fetchFromBitbucket`
## `fetchFromBitbucket` {#fetchfrombitbucket}
This is used with BitBucket repositories. The arguments expected are very similar to fetchFromGitHub above.
## `fetchFromSavannah`
## `fetchFromSavannah` {#fetchfromsavannah}
This is used with Savannah repositories. The arguments expected are very similar to `fetchFromGitHub` above.
This is used with Savannah repositories. The arguments expected are very similar to fetchFromGitHub above.
## `fetchFromRepoOrCz` {#fetchfromrepoorcz}
## `fetchFromRepoOrCz`
This is used with repo.or.cz repositories. The arguments expected are very similar to `fetchFromGitHub` above.
This is used with repo.or.cz repositories. The arguments expected are very similar to fetchFromGitHub above.
## `fetchFromSourcehut` {#fetchfromsourcehut}
## `fetchFromSourcehut`
This is used with sourcehut repositories. Similar to `fetchFromGitHub` above,
it expects `owner`, `repo`, `rev` and `sha256`, but don't forget the tilde (~)
in front of the username! Expected arguments also include `vc` ("git" (default)
or "hg"), `domain` and `fetchSubmodules`.
This is used with sourcehut repositories. The arguments expected are very similar to fetchFromGitHub above. Don't forget the tilde (~) in front of the user name!
If `fetchSubmodules` is `true`, `fetchFromSourcehut` uses `fetchgit`
or `fetchhg` with `fetchSubmodules` or `fetchSubrepos` set to `true`,
respectively. Otherwise, the fetcher uses `fetchzip`.
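A sketch of a `fetchFromSourcehut` call, with placeholder values:
```nix
src = fetchFromSourcehut {
  owner = "~some-user"; # note the leading tilde
  repo = "some-repo";
  rev = "v1.0";
  sha256 = "0000000000000000000000000000000000000000000000000000";
};
```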

@ -2,7 +2,7 @@
`pkgs.appimageTools` is a set of functions for extracting and wrapping [AppImage](https://appimage.org/) files. They are meant to be used if traditional packaging from source is infeasible, or it would take too long. To quickly run an AppImage file, `pkgs.appimage-run` can be used as well.
::: warning
::: {.warning}
The `appimageTools` API is unstable and may be subject to backwards-incompatible changes in the future.
:::

@ -1,6 +1,6 @@
# pkgs.dockerTools {#sec-pkgs-dockerTools}
`pkgs.dockerTools` is a set of functions for creating and manipulating Docker images according to the [ Docker Image Specification v1.2.0 ](https://github.com/moby/moby/blob/master/image/spec/v1.2.md#docker-image-specification-v120). Docker itself is not used to perform any of the operations done by these functions.
`pkgs.dockerTools` is a set of functions for creating and manipulating Docker images according to the [Docker Image Specification v1.2.0](https://github.com/moby/moby/blob/master/image/spec/v1.2.md#docker-image-specification-v120). Docker itself is not used to perform any of the operations done by these functions.
## buildImage {#ssec-pkgs-dockerTools-buildImage}
@ -52,13 +52,13 @@ The above example will build a Docker image `redis/latest` from the given base i
> **_NOTE:_** Using this parameter requires the `kvm` device to be available.
- `config` is used to specify the configuration of the containers that will be started off the built image in Docker. The available options are listed in the [ Docker Image Specification v1.2.0 ](https://github.com/moby/moby/blob/master/image/spec/v1.2.md#image-json-field-descriptions).
- `config` is used to specify the configuration of the containers that will be started off the built image in Docker. The available options are listed in the [Docker Image Specification v1.2.0](https://github.com/moby/moby/blob/master/image/spec/v1.2.md#image-json-field-descriptions).
After the new layer has been created, its closure (to which `contents`, `config` and `runAsRoot` contribute) will be copied in the layer itself. Only new dependencies that are not already in the existing layers will be copied.
At the end of the process, only one new single layer will be produced and added to the resulting image.
The resulting repository will only list the single image `image/tag`. In the case of [the `buildImage` example](#ex-dockerTools-buildImage) it would be `redis/latest`.
The resulting repository will only list the single image `image/tag`. In the case of [the `buildImage` example](#ex-dockerTools-buildImage), it would be `redis/latest`.
It is possible to inspect the arguments with which an image was built using its `buildArgs` attribute.
@ -87,7 +87,7 @@ pkgs.dockerTools.buildImage {
}
```
and now the Docker CLI will display a reasonable date and sort the images as expected:
Now the Docker CLI will display a reasonable date and sort the images as expected:
```ShellSession
$ docker images
@ -95,7 +95,7 @@ REPOSITORY TAG IMAGE ID CREATED SIZE
hello latest de2bf4786de6 About a minute ago 25.2MB
```
however, the produced images will not be binary reproducible.
However, the produced images will not be binary reproducible.
## buildLayeredImage {#ssec-pkgs-dockerTools-buildLayeredImage}
@ -119,13 +119,13 @@ Create a Docker image with many of the store paths being on their own layer to i
`contents` _optional_
: Top level paths in the container. Either a single derivation, or a list of derivations.
: Top-level paths in the container. Either a single derivation, or a list of derivations.
*Default:* `[]`
`config` _optional_
: Run-time configuration of the container. A full list of the options are available at in the [ Docker Image Specification v1.2.0 ](https://github.com/moby/moby/blob/master/image/spec/v1.2.md#image-json-field-descriptions).
: Run-time configuration of the container. A full list of the options is available in the [Docker Image Specification v1.2.0](https://github.com/moby/moby/blob/master/image/spec/v1.2.md#image-json-field-descriptions).
*Default:* `{}`
@ -151,6 +151,12 @@ Create a Docker image with many of the store paths being on their own layer to i
: Shell commands to run while creating the archive for the final layer in a fakeroot environment. Unlike `extraCommands`, you can run `chown` to change the owners of the files in the archive, changing fakeroot's state instead of the real filesystem. The latter would require privileges that the build user does not have. Static binaries do not interact with the fakeroot environment. By default all files in the archive will be owned by root.
`enableFakechroot` _optional_
: Whether to run `fakeRootCommands` in `fakechroot`, making programs behave as though `/` is the root of the image being created, while files in the Nix store are available as usual. This allows scripts that perform installation in `/` to work as expected. Considering that `fakechroot` is implemented via the same mechanism as `fakeroot`, the same caveats apply.
*Default:* `false`
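A minimal sketch combining these two options (the image name, paths, and ownership are illustrative):
```nix
pkgs.dockerTools.buildLayeredImage {
  name = "hello-fakeroot";
  contents = [ pkgs.hello ];
  fakeRootCommands = ''
    mkdir -p /var/lib/hello
    # Only changes the ownership recorded in the archive, not on the real filesystem.
    chown 1000:1000 /var/lib/hello
  '';
  # Let fakeRootCommands see / as the root of the image being built.
  enableFakechroot = true;
}
```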
### Behavior of `contents` in the final image {#dockerTools-buildLayeredImage-arg-contents}
Each path directly listed in `contents` will have a symlink in the root of the image.
@ -189,9 +195,9 @@ pkgs.dockerTools.buildLayeredImage {
Increasing the `maxLayers` increases the number of layers which have a chance to be shared between different images.
Modern Docker installations support up to 128 layers, however older versions support as few as 42.
Modern Docker installations support up to 128 layers, but older versions support as few as 42.
If the produced image will not be extended by other Docker builds, it is safe to set `maxLayers` to `128`. However it will be impossible to extend the image further.
If the produced image will not be extended by other Docker builds, it is safe to set `maxLayers` to `128`. However, it will be impossible to extend the image further.
The first (`maxLayers-2`) most "popular" paths will have their own individual layers, then layer \#`maxLayers-1` will contain all the remaining "unpopular" paths, and finally layer \#`maxLayers` will contain the Image configuration.
@ -207,7 +213,7 @@ The image produced by running the output script can be piped directly into `dock
$(nix-build) | docker load
```
Alternatively, the image be piped via `gzip` into `skopeo`, e.g. to copy it into a registry:
Alternatively, the image can be piped via `gzip` into `skopeo`, e.g., to copy it into a registry:
```ShellSession
$(nix-build) | gzip --fast | skopeo copy docker-archive:/dev/stdin docker://some_docker_registry/myimage:tag

@ -1,10 +1,10 @@
# pkgs.ociTools {#sec-pkgs-ociTools}
`pkgs.ociTools` is a set of functions for creating containers according to the [OCI container specification v1.0.0](https://github.com/opencontainers/runtime-spec). Beyond that it makes no assumptions about the container runner you choose to use to run the created container.
`pkgs.ociTools` is a set of functions for creating containers according to the [OCI container specification v1.0.0](https://github.com/opencontainers/runtime-spec). Beyond that, it makes no assumptions about the container runner you choose to use to run the created container.
## buildContainer {#ssec-pkgs-ociTools-buildContainer}
This function creates a simple OCI container that runs a single command inside of it. An OCI container consists of a `config.json` and a rootfs directory.The nix store of the container will contain all referenced dependencies of the given command.
This function creates a simple OCI container that runs a single command inside of it. An OCI container consists of a `config.json` and a rootfs directory. The nix store of the container will contain all referenced dependencies of the given command.
The parameters of `buildContainer` with an example value are described below:
@ -30,7 +30,7 @@ buildContainer {
}
```
- `args` specifies a set of arguments to run inside the container. This is the only required argument for `buildContainer`. All referenced packages inside the derivation will be made available inside the container
- `args` specifies a set of arguments to run inside the container. This is the only required argument for `buildContainer`. All referenced packages inside the derivation will be made available inside the container.
- `mounts` specifies additional mount points chosen by the user. By default, only a minimal set of necessary filesystems is mounted into the container (e.g. procfs, cgroupfs).

@ -14,7 +14,7 @@ Currently, `makeSnap` does not support creating GUI stubs.
The following expression packages GNU Hello as a Snapcraft snap.
```{#ex-snapTools-buildSnap-hello .nix}
``` {#ex-snapTools-buildSnap-hello .nix}
let
inherit (import <nixpkgs> { }) snapTools hello;
in snapTools.makeSnap {
@ -33,9 +33,9 @@ in snapTools.makeSnap {
## Build a Graphical Snap {#ssec-pkgs-snapTools-build-a-snap-firefox}
Graphical programs require many more integrations with the host. This example uses Firefox as an example, because it is one of the most complicated programs we could package.
Graphical programs require many more integrations with the host. This example uses Firefox as an example because it is one of the most complicated programs we could package.
```{#ex-snapTools-buildSnap-firefox .nix}
``` {#ex-snapTools-buildSnap-firefox .nix}
let
inherit (import <nixpkgs> { }) snapTools firefox;
in snapTools.makeSnap {

@ -1,6 +1,6 @@
# Cataclysm: Dark Days Ahead {#cataclysm-dark-days-ahead}
## How to install Cataclysm DDA
## How to install Cataclysm DDA {#how-to-install-cataclysm-dda}
To install the latest stable release of Cataclysm DDA to your profile, execute
`nix-env -f "<nixpkgs>" -iA cataclysm-dda`. For the curses build (build
@ -34,7 +34,7 @@ cataclysm-dda.override {
}
```
## Important note for overriding packages
## Important note for overriding packages {#important-note-for-overriding-packages}
After applying `overrideAttrs`, you need to fix `passthru.pkgs` and
`passthru.withMods` attributes either manually or by using `attachPkgs`:
@ -69,7 +69,7 @@ in
goodExample2.withMods (_: []) # parallel building enabled
```
## Customizing with mods
## Customizing with mods {#customizing-with-mods}
To install Cataclysm DDA with mods of your choice, you can use `withMods`
attribute:

@ -4,13 +4,13 @@ The [Citrix Workspace App](https://www.citrix.com/products/workspace-app/) is a
## Basic usage {#sec-citrix-base}
The tarball archive needs to be downloaded manually as the license agreements of the vendor for [Citrix Workspace](https://www.citrix.de/downloads/workspace-app/linux/workspace-app-for-linux-latest.html) needs to be accepted first. Then run `nix-prefetch-url file://$PWD/linuxx64-$version.tar.gz`. With the archive available in the store the package can be built and installed with Nix.
The tarball archive needs to be downloaded manually, as the license agreements of the vendor for [Citrix Workspace](https://www.citrix.de/downloads/workspace-app/linux/workspace-app-for-linux-latest.html) need to be accepted first. Then run `nix-prefetch-url file://$PWD/linuxx64-$version.tar.gz`. With the archive available in the store, the package can be built and installed with Nix.
## Citrix Selfservice {#sec-citrix-selfservice}
## Citrix Self-service {#sec-citrix-selfservice}
The [selfservice](https://support.citrix.com/article/CTX200337) is an application managing Citrix desktops and applications. Please note that this feature only works with at least citrix_workspace_20_06_0 and later versions.
The [self-service](https://support.citrix.com/article/CTX200337) is an application for managing Citrix desktops and applications. Please note that this feature only works with citrix_workspace_20_06_0 and later versions.
In order to set this up, you first have to [download the `.cr` file from the Netscaler Gateway](https://its.uiowa.edu/support/article/102186). After that you can configure the `selfservice` like this:
In order to set this up, you first have to [download the `.cr` file from the Netscaler Gateway](https://its.uiowa.edu/support/article/102186). After that, you can configure the `selfservice` like this:
```ShellSession
$ storebrowse -C ~/Downloads/receiverconfig.cr
@ -19,7 +19,7 @@ $ selfservice
## Custom certificates {#sec-citrix-custom-certs}
The `Citrix Workspace App` in `nixpkgs` trusts several certificates [from the Mozilla database](https://curl.haxx.se/docs/caextract.html) by default. However several companies using Citrix might require their own corporate certificate. On distros with imperative packaging these certs can be stored easily in [`$ICAROOT`](https://developer-docs.citrix.com/projects/receiver-for-linux-command-reference/en/13.7/), however this directory is a store path in `nixpkgs`. In order to work around this issue the package provides a simple mechanism to add custom certificates without rebuilding the entire package using `symlinkJoin`:
The `Citrix Workspace App` in `nixpkgs` trusts several certificates [from the Mozilla database](https://curl.haxx.se/docs/caextract.html) by default. However, several companies using Citrix might require their own corporate certificate. On distros with imperative packaging, these certs can be stored easily in [`$ICAROOT`](https://developer-docs.citrix.com/projects/receiver-for-linux-command-reference/en/13.7/); however, this directory is a store path in `nixpkgs`. In order to work around this issue, the package provides a simple mechanism to add custom certificates without rebuilding the entire package using `symlinkJoin`:
```nix
with import <nixpkgs> { config.allowUnfree = true; };

@ -8,9 +8,9 @@ Nixpkgs provides a number of packages that will install Eclipse in its various f
$ nix-env -f '<nixpkgs>' -qaP -A eclipses --description
```
Once an Eclipse variant is installed it can be run using the `eclipse` command, as expected. From within Eclipse it is then possible to install plugins in the usual manner by either manually specifying an Eclipse update site or by installing the Marketplace Client plugin and using it to discover and install other plugins. This installation method provides an Eclipse installation that closely resemble a manually installed Eclipse.
Once an Eclipse variant is installed, it can be run using the `eclipse` command, as expected. From within Eclipse, it is then possible to install plugins in the usual manner by either manually specifying an Eclipse update site or by installing the Marketplace Client plugin and using it to discover and install other plugins. This installation method provides an Eclipse installation that closely resembles a manually installed Eclipse.
If you prefer to install plugins in a more declarative manner then Nixpkgs also offer a number of Eclipse plugins that can be installed in an _Eclipse environment_. This type of environment is created using the function `eclipseWithPlugins` found inside the `nixpkgs.eclipses` attribute set. This function takes as argument `{ eclipse, plugins ? [], jvmArgs ? [] }` where `eclipse` is a one of the Eclipse packages described above, `plugins` is a list of plugin derivations, and `jvmArgs` is a list of arguments given to the JVM running the Eclipse. For example, say you wish to install the latest Eclipse Platform with the popular Eclipse Color Theme plugin and also allow Eclipse to use more RAM. You could then add
If you prefer to install plugins in a more declarative manner, then Nixpkgs also offers a number of Eclipse plugins that can be installed in an _Eclipse environment_. This type of environment is created using the function `eclipseWithPlugins` found inside the `nixpkgs.eclipses` attribute set. This function takes as argument `{ eclipse, plugins ? [], jvmArgs ? [] }` where `eclipse` is one of the Eclipse packages described above, `plugins` is a list of plugin derivations, and `jvmArgs` is a list of arguments given to the JVM running the Eclipse. For example, say you wish to install the latest Eclipse Platform with the popular Eclipse Color Theme plugin and also allow Eclipse to use more RAM. You could then add:
```nix
packageOverrides = pkgs: {
@ -22,15 +22,15 @@ packageOverrides = pkgs: {
}
```
to your Nixpkgs configuration (`~/.config/nixpkgs/config.nix`) and install it by running `nix-env -f '<nixpkgs>' -iA myEclipse` and afterward run Eclipse as usual. It is possible to find out which plugins are available for installation using `eclipseWithPlugins` by running
to your Nixpkgs configuration (`~/.config/nixpkgs/config.nix`) and install it by running `nix-env -f '<nixpkgs>' -iA myEclipse` and afterward run Eclipse as usual. It is possible to find out which plugins are available for installation using `eclipseWithPlugins` by running:
```ShellSession
$ nix-env -f '<nixpkgs>' -qaP -A eclipses.plugins --description
```
If there is a need to install plugins that are not available in Nixpkgs then it may be possible to define these plugins outside Nixpkgs using the `buildEclipseUpdateSite` and `buildEclipsePlugin` functions found in the `nixpkgs.eclipses.plugins` attribute set. Use the `buildEclipseUpdateSite` function to install a plugin distributed as an Eclipse update site. This function takes `{ name, src }` as argument where `src` indicates the Eclipse update site archive. All Eclipse features and plugins within the downloaded update site will be installed. When an update site archive is not available then the `buildEclipsePlugin` function can be used to install a plugin that consists of a pair of feature and plugin JARs. This function takes an argument `{ name, srcFeature, srcPlugin }` where `srcFeature` and `srcPlugin` are the feature and plugin JARs, respectively.
If there is a need to install plugins that are not available in Nixpkgs then it may be possible to define these plugins outside Nixpkgs using the `buildEclipseUpdateSite` and `buildEclipsePlugin` functions found in the `nixpkgs.eclipses.plugins` attribute set. Use the `buildEclipseUpdateSite` function to install a plugin distributed as an Eclipse update site. This function takes `{ name, src }` as argument, where `src` indicates the Eclipse update site archive. All Eclipse features and plugins within the downloaded update site will be installed. When an update site archive is not available, then the `buildEclipsePlugin` function can be used to install a plugin that consists of a pair of feature and plugin JARs. This function takes an argument `{ name, srcFeature, srcPlugin }` where `srcFeature` and `srcPlugin` are the feature and plugin JARs, respectively.
Expanding the previous example with two plugins using the above functions we have
Expanding the previous example with two plugins using the above functions, we have:
```nix
packageOverrides = pkgs: {

@ -1,11 +1,11 @@
# Elm {#sec-elm}
To start a development environment do
To start a development environment, run:
```ShellSession
nix-shell -p elmPackages.elm elmPackages.elm-format
```
To update the Elm compiler, see <filename>nixpkgs/pkgs/development/compilers/elm/README.md</filename>.
To update the Elm compiler, see `nixpkgs/pkgs/development/compilers/elm/README.md`.
To package Elm applications, [read about elm2nix](https://github.com/hercules-ci/elm2nix#elm2nix).

@ -20,7 +20,7 @@ The Emacs package comes with some extra helpers to make it easier to configure.
}
```
You can install it like any other packages via `nix-env -iA myEmacs`. However, this will only install those packages. It will not `configure` them for us. To do this, we need to provide a configuration file. Luckily, it is possible to do this from within Nix! By modifying the above example, we can make Emacs load a custom config file. The key is to create a package that provide a `default.el` file in `/share/emacs/site-start/`. Emacs knows to load this file automatically when it starts.
You can install it like any other package via `nix-env -iA myEmacs`. However, this will only install those packages. It will not `configure` them for us. To do this, we need to provide a configuration file. Luckily, it is possible to do this from within Nix! By modifying the above example, we can make Emacs load a custom config file. The key is to create a package that provides a `default.el` file in `/share/emacs/site-start/`. Emacs knows to load this file automatically when it starts.
```nix
{
@ -101,16 +101,16 @@ You can install it like any other packages via `nix-env -iA myEmacs`. However, t
}
```
This provides a fairly full Emacs start file. It will load in addition to the user's presonal config. You can always disable it by passing `-q` to the Emacs command.
This provides a fairly full Emacs start file. It will load in addition to the user's personal config. You can always disable it by passing `-q` to the Emacs command.
Sometimes `emacs.pkgs.withPackages` is not enough, as this package set has some priorities imposed on packages (with the lowest priority assigned to Melpa Unstable, and the highest for packages manually defined in `pkgs/top-level/emacs-packages.nix`). But you can't control this priorities when some package is installed as a dependency. You can override it on per-package-basis, providing all the required dependencies manually - but it's tedious and there is always a possibility that an unwanted dependency will sneak in through some other package. To completely override such a package you can use `overrideScope'`.
Sometimes `emacs.pkgs.withPackages` is not enough, as this package set has some priorities imposed on packages (with the lowest priority assigned to Melpa Unstable, and the highest for packages manually defined in `pkgs/top-level/emacs-packages.nix`). But you can't control these priorities when some package is installed as a dependency. You can override it on a per-package-basis, providing all the required dependencies manually, but it's tedious and there is always a possibility that an unwanted dependency will sneak in through some other package. To completely override such a package, you can use `overrideScope'`.
```nix
overrides = self: super: rec {
haskell-mode = self.melpaPackages.haskell-mode;
...
};
((emacsPackagesFor emacs).overrideScope' overrides).emacs.pkgs.withPackages
((emacsPackagesFor emacs).overrideScope' overrides).withPackages
(p: with p; [
# here both of these packages will use the haskell-mode of our choice
ghc-mod

@ -0,0 +1,18 @@
# /etc files {#etc}
Certain calls in glibc require access to runtime files found in `/etc` such as `/etc/protocols` or `/etc/services` -- [getprotobyname](https://linux.die.net/man/3/getprotobyname) is one such function.
On non-NixOS distributions, these files are typically provided by packages (e.g., [netbase](https://packages.debian.org/sid/netbase)) if not already pre-installed in your distribution. This can cause non-reproducibility for code if it relies on these files being present.
If [iana-etc](https://hydra.nixos.org/job/nixos/trunk-combined/nixpkgs.iana-etc.x86_64-linux) is part of your `buildInputs`, then it will set the environment variables `NIX_ETC_PROTOCOLS` and `NIX_ETC_SERVICES` to the corresponding files in the package through a setup hook.
```bash
> nix-shell -p iana-etc
[nix-shell:~]$ env | grep NIX_ETC
NIX_ETC_SERVICES=/nix/store/aj866hr8fad8flnggwdhrldm0g799ccz-iana-etc-20210225/etc/services
NIX_ETC_PROTOCOLS=/nix/store/aj866hr8fad8flnggwdhrldm0g799ccz-iana-etc-20210225/etc/protocols
```
The Nixpkgs version of [glibc](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/libraries/glibc/default.nix) has been patched to check for the existence of these environment variables. If the environment variables are *not* set, then it will attempt to find the files at the default location within `/etc`.

@ -1,12 +1,13 @@
# Firefox {#sec-firefox}
## Build wrapped Firefox with extensions and policies
## Build wrapped Firefox with extensions and policies {#build-wrapped-firefox-with-extensions-and-policies}
The `wrapFirefox` function allows to pass policies, preferences and extension that are available to firefox. With the help of `fetchFirefoxAddon` this allows build a firefox version that already comes with addons pre-installed:
The `wrapFirefox` function allows you to pass policies, preferences and extensions that are available to Firefox. With the help of `fetchFirefoxAddon`, this allows building a Firefox version that already comes with add-ons pre-installed:
```nix
{
myFirefox = wrapFirefox firefox-unwrapped {
# Nix firefox addons only work with the firefox-esr package.
myFirefox = wrapFirefox firefox-esr-unwrapped {
nixExtensions = [
(fetchFirefoxAddon {
name = "ublock"; # Has to be unique!
@ -39,11 +40,12 @@ The `wrapFirefox` function allows to pass policies, preferences and extension th
}
```
If `nixExtensions != null` then all manually installed addons will be uninstalled from your browser profile.
To view available enterprise policies visit [enterprise policies](https://github.com/mozilla/policy-templates#enterprisepoliciesenabled)
or type into the Firefox url bar: `about:policies#documentation`.
Nix installed addons do not have a valid signature, which is why signature verification is disabled. This does not compromise security because downloaded addons are checksumed and manual addons can't be installed. Also make sure that the `name` field of fetchFirefoxAddon is unique. If you remove an addon from the nixExtensions array, rebuild and start Firefox the removed addon will be completly removed with all of its settings.
If `nixExtensions != null`, then all manually installed add-ons will be uninstalled from your browser profile.
To view available enterprise policies, visit [enterprise policies](https://github.com/mozilla/policy-templates#enterprisepoliciesenabled)
or type into the Firefox URL bar: `about:policies#documentation`.
Nix installed add-ons do not have a valid signature, which is why signature verification is disabled. This does not compromise security because downloaded add-ons are checksummed and manual add-ons can't be installed. Also, make sure that the `name` field of `fetchFirefoxAddon` is unique. If you remove an add-on from the `nixExtensions` array, rebuild and start Firefox: the removed add-on will be completely removed with all of its settings.
## Troubleshooting {#sec-firefox-troubleshooting}
If addons do not appear installed although they have been defined in your nix configuration file reset the local addon state of your Firefox profile by clicking `help -> restart with addons disabled -> restart -> refresh firefox`. This can happen if you switch from manual addon mode to nix addon mode and then back to manual mode and then again to nix addon mode.
If add-ons are marked as broken or the signature is invalid, make sure you have Firefox ESR installed. Normal Firefox no longer provides the ability to disable signature verification for add-ons, so Nix add-ons get disabled by the normal Firefox binary.
If add-ons do not appear installed despite being defined in your nix configuration file, reset the local add-on state of your Firefox profile by clicking `Help -> More Troubleshooting Information -> Refresh Firefox`. This can happen if you switch from manual add-on mode to nix add-on mode and then back to manual mode and then again to nix add-on mode.

@ -36,7 +36,7 @@ using `buildFishPlugin` and running unit tests with the `fishtape` test runner.
## Fish wrapper {#sec-fish-wrapper}
The `wrapFish` package is a wrapper around Fish which can be used to create
Fish shells initialised with some plugins as well as completions, configuration
Fish shells initialized with some plugins as well as completions, configuration
snippets and functions sourced from the given paths. This provides a convenient
way to test Fish plugins and scripts without having to alter the environment.

@ -24,10 +24,10 @@ packages on macOS:
checking for fuse.h... no
configure: error: No fuse.h found.
This happens on autoconf based projects that uses `AC_CHECK_HEADERS` or
This happens on autoconf based projects that use `AC_CHECK_HEADERS` or
`AC_CHECK_LIBS` to detect libfuse, and will occur even when the `fuse` package
is included in `buildInputs`. It happens because libfuse headers throw an error
on macOS if the `FUSE_USE_VERSION` macro is undefined. Many proejcts do define
on macOS if the `FUSE_USE_VERSION` macro is undefined. Many projects do define
`FUSE_USE_VERSION`, but only inside C source files. This results in the above
error at configure time because the configure script would attempt to compile
sample FUSE programs without defining `FUSE_USE_VERSION`.

@ -6,7 +6,7 @@ This package is an ibus-based completion method to speed up typing.
IBus needs to be configured accordingly to activate `typing-booster`. The configuration depends on the desktop manager in use. For detailed instructions, please refer to the [upstream docs](https://mike-fabian.github.io/ibus-typing-booster/documentation.html).
On NixOS you need to explicitly enable `ibus` with given engines before customizing your desktop to use `typing-booster`. This can be achieved using the `ibus` module:
On NixOS, you need to explicitly enable `ibus` with given engines before customizing your desktop to use `typing-booster`. This can be achieved using the `ibus` module:
```nix
{ pkgs, ... }: {
@ -19,7 +19,7 @@ On NixOS you need to explicitly enable `ibus` with given engines before customiz
## Using custom hunspell dictionaries {#sec-ibus-typing-booster-customize-hunspell}
The IBus engine is based on `hunspell` to support completion in many languages. By default the dictionaries `de-de`, `en-us`, `fr-moderne` `es-es`, `it-it`, `sv-se` and `sv-fi` are in use. To add another dictionary, the package can be overridden like this:
The IBus engine is based on `hunspell` to support completion in many languages. By default, the dictionaries `de-de`, `en-us`, `fr-moderne`, `es-es`, `it-it`, `sv-se` and `sv-fi` are in use. To add another dictionary, the package can be overridden like this:
```nix
ibus-engines.typing-booster.override { langs = [ "de-at" "en-gb" ]; }
@ -31,7 +31,7 @@ _Note: each language passed to `langs` must be an attribute name in `pkgs.hunspe
The `ibus-engines.typing-booster` package contains a program named `emoji-picker`. To display all emojis correctly, a special font such as `noto-fonts-emoji` is needed:
On NixOS it can be installed using the following expression:
On NixOS, it can be installed using the following expression:
```nix
{ pkgs, ... }: { fonts.fonts = with pkgs; [ noto-fonts-emoji ]; }

@ -17,6 +17,7 @@
<xi:include href="kakoune.section.xml" />
<xi:include href="linux.section.xml" />
<xi:include href="locales.section.xml" />
<xi:include href="etc-files.section.xml" />
<xi:include href="nginx.section.xml" />
<xi:include href="opengl.section.xml" />
<xi:include href="shell-helpers.section.xml" />

@ -4,7 +4,7 @@ The Nix expressions to build the Linux kernel are in [`pkgs/os-specific/linux/ke
The function that builds the kernel has an argument `kernelPatches` which should be a list of `{name, patch, extraConfig}` attribute sets, where `name` is the name of the patch (which is included in the kernel’s `meta.description` attribute), `patch` is the patch itself (possibly compressed), and `extraConfig` (optional) is a string specifying extra options to be concatenated to the kernel configuration file (`.config`).
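For example, a single `kernelPatches` entry might look like this (the patch name, file, and config option are hypothetical):
```nix
{
  name = "example-fix";
  patch = ./patches/example-fix.patch;
  # Extra options appended to the kernel .config
  extraConfig = ''
    EXAMPLE_OPTION y
  '';
}
```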
The kernel derivation exports an attribute `features` specifying whether optional functionality is or isn’t enabled. This is used in NixOS to implement kernel-specific behaviour. For instance, if the kernel has the `iwlwifi` feature (i.e. has built-in support for Intel wireless chipsets), then NixOS doesn’t have to build the external `iwlwifi` package:
The kernel derivation exports an attribute `features` specifying whether optional functionality is or isn’t enabled. This is used in NixOS to implement kernel-specific behaviour. For instance, if the kernel has the `iwlwifi` feature (i.e., has built-in support for Intel wireless chipsets), then NixOS doesn’t have to build the external `iwlwifi` package:
```nix
modulesTree = [kernel]
@ -14,28 +14,28 @@ modulesTree = [kernel]
How to add a new (major) version of the Linux kernel to Nixpkgs:
1. Copy the old Nix expression (e.g. `linux-2.6.21.nix`) to the new one (e.g. `linux-2.6.22.nix`) and update it.
1. Copy the old Nix expression (e.g., `linux-2.6.21.nix`) to the new one (e.g., `linux-2.6.22.nix`) and update it.
2. Add the new kernel to the `kernels` attribute set in `linux-kernels.nix` (e.g., create an attribute `kernel_2_6_22`).
3. Now we’re going to update the kernel configuration. First unpack the kernel. Then for each supported platform (`i686`, `x86_64`, `uml`) do the following:
1. Make a copy of the old config (e.g., `config-2.6.21-i686-smp`) under the new name (e.g., `config-2.6.22-i686-smp`).
2. Copy the config file for this platform (e.g., `config-2.6.22-i686-smp`) to `.config` in the kernel source tree.
3. Run `make oldconfig ARCH={i386,x86_64,um}` and answer all questions. (For the uml configuration, also add `SHELL=bash`.) Make sure to keep the configuration consistent between platforms (i.e., don’t enable some feature on `i686` and disable it on `x86_64`).
4. If needed, you can also run `make menuconfig`:
```ShellSession
$ nix-env -f "<nixpkgs>" -iA ncurses
$ export NIX_CFLAGS_LINK=-lncurses
$ make menuconfig ARCH=arch
```
5. Copy `.config` over the new config file (e.g., `config-2.6.22-i686-smp`).
4. Test building the kernel: `nix-build -A linuxKernel.kernels.kernel_2_6_22`. If it compiles, ship it! For extra credit, try booting NixOS with it.
5. It may be that the new kernel requires updating the external kernel modules and kernel-dependent packages listed in the `linuxPackagesFor` function in `linux-kernels.nix` (such as the NVIDIA drivers, AUFS, etc.). If the updated packages aren’t backwards compatible with older kernels, you may need to keep the older versions around.

@ -1,5 +1,5 @@
# Locales {#locales}
To allow simultaneous use of packages linked against different versions of `glibc` with different locale archive formats, Nixpkgs patches `glibc` to rely on the `LOCALE_ARCHIVE` environment variable.
On non-NixOS distributions, this variable is obviously not set. This can cause regressions in language support or even crashes in some Nixpkgs-provided programs. The simplest way to mitigate this problem is to export the `LOCALE_ARCHIVE` variable, pointing it to `${glibcLocales}/lib/locale/locale-archive`. The drawback (and the reason this is not the default) is the relatively large (about a hundred MiB) size of the full set of locales. It is possible to build a custom set of locales by overriding the `allLocales` and `locales` parameters of the package.
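As a sketch, a reduced locale set could be built with the override parameters mentioned above (the locale names are chosen arbitrarily):
```nix
# Only build the locales actually needed, instead of the full archive
glibcLocales.override {
  allLocales = false;
  locales = [ "en_US.UTF-8/UTF-8" "de_DE.UTF-8/UTF-8" ];
}
```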

@ -4,8 +4,8 @@
## ETags on static files served from the Nix store {#sec-nginx-etag}
HTTP has a couple of different mechanisms for caching to prevent clients from having to download the same content repeatedly if a resource has not changed since the last time it was requested. When nginx is used as a server for static files, it implements the caching mechanism based on the [`Last-Modified`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Last-Modified) response header automatically; unfortunately, it works by using filesystem timestamps to determine the value of the `Last-Modified` header. This doesn't give the desired behavior when the file is in the Nix store because all file timestamps are set to 0 (for reasons related to build reproducibility).
Fortunately, HTTP supports an alternative (and more effective) caching mechanism: the [`ETag`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/ETag) response header. The value of the `ETag` header specifies some identifier for the particular content that the server is sending (e.g., a hash). When a client makes a second request for the same resource, it sends that value back in an `If-None-Match` header. If the ETag value is unchanged, then the server does not need to resend the content.
As of NixOS 19.09, the nginx package in Nixpkgs is patched such that when nginx serves a file out of `/nix/store`, the hash in the store path is used as the `ETag` header in the HTTP response, thus providing proper caching functionality. This happens automatically; you do not need to modify any configuration to get this behavior.

@ -4,12 +4,12 @@ OpenGL support varies depending on which hardware is used and which drivers are
Broadly, we support both GL vendors: Mesa and NVIDIA.
## NixOS Desktop {#nixos-desktop}
The NixOS desktop or other non-headless configurations are the primary target for OpenGL libraries and applications. The current solution for discovering which drivers are available is based on [libglvnd](https://gitlab.freedesktop.org/glvnd/libglvnd). `libglvnd` performs "vendor-neutral dispatch", trying a variety of techniques to find the system's GL implementation. In practice, this will be either via standard GLX for X11 users or EGL for Wayland users, and supporting either NVIDIA or Mesa extensions.
## Nix on GNU/Linux {#nix-on-gnulinux}
If you are using a non-NixOS GNU/Linux/X11 desktop with free software video drivers, consider launching OpenGL-dependent programs from Nixpkgs with Nixpkgs versions of `libglvnd` and `mesa.drivers` in `LD_LIBRARY_PATH`. For Mesa drivers, the Linux kernel version doesn't have to match nixpkgs.
For proprietary video drivers, you might have luck with also adding the corresponding video driver package.

@ -4,7 +4,7 @@ Some packages provide the shell integration to be more useful. But unlike other
- `fzf` : `fzf-share`
For example, `fzf` can then be used in the `.bashrc` like this:
```bash
source "$(fzf-share)/completion.bash"

@ -2,24 +2,25 @@
## Steam in Nix {#sec-steam-nix}
Steam is distributed as a `.deb` file, for now only as an i686 package (the amd64 package only has documentation). When unpacked, it has a script called `steam` that in Ubuntu (their target distro) would go to `/usr/bin`. When run for the first time, this script copies some files to the user's home, including another script that is ultimately responsible for launching the steam binary, which is also in `$HOME`.
Nix problems and constraints:
- We don't have `/bin/bash` and many scripts point there. Same thing for `/usr/bin/python`.
- We don't have the dynamic loader in `/lib`.
- The `steam.sh` script in `$HOME` cannot be patched, as it is checked and rewritten by steam.
- The steam binary cannot be patched, it's also checked.
The current approach to deploying Steam in NixOS is to compose an FHS-compatible chroot environment, as documented [here](http://sandervanderburg.blogspot.nl/2013/09/composing-fhs-compatible-chroot.html). This allows us to have binaries in the expected paths without disrupting the system, and to avoid patching them to work in a non-FHS environment.
## How to play {#sec-steam-play}
Use `programs.steam.enable = true;` if you want to add Steam to `systemPackages` and also enable a few workarounds, as well as Steam controller support or other Steam-supported controllers such as the DualShock 4 or Nintendo Switch Pro Controller.
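For example, in your NixOS configuration (`configuration.nix`):
```nix
{
  programs.steam.enable = true;
}
```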
## Troubleshooting {#sec-steam-troub}
- **Steam fails to start. What do I do?**
Try to run
```ShellSession
@ -31,38 +32,31 @@ Use `programs.steam.enable = true;` if you want to add steam to systemPackages a
- **Using the FOSS Radeon or nouveau (nvidia) drivers**
- The `newStdcpp` parameter was removed in NixOS 17.09 and should not be needed anymore.
- Steam ships statically linked with a version of `libcrypto` that conflicts with the one dynamically loaded by `radeonsi_dri.so`. If you get the error:
```
steam.sh: line 713: 7842 Segmentation fault (core dumped)
```
have a look at [this pull request](https://github.com/NixOS/nixpkgs/pull/20269).
- **Java**
1. There is no java in steam chrootenv by default. If you get a message like:
```
/home/foo/.local/share/Steam/SteamApps/common/towns/towns.sh: line 1: java: command not found
```
you need to add:
```nix
steam.override { withJava = true; };
```
## steam-run {#sec-steam-run}
The FHS-compatible chroot used for Steam can also be used to run other Linux games that expect a FHS environment. To use it, install the `steam-run` package and run the game with:
```
steam-run ./foo

@ -4,7 +4,7 @@ Urxvt, also known as rxvt-unicode, is a highly customizable terminal emulator.
## Configuring urxvt {#sec-urxvt-conf}
In `nixpkgs`, urxvt is provided by the package `rxvt-unicode`. It can be configured to include your choice of plugins, reducing its closure size from the default configuration which includes all available plugins. To make use of this functionality, use an overlay or directly install an expression that overrides its configuration, such as:
```nix
rxvt-unicode.override {
@ -58,14 +58,14 @@ rxvt-unicode.override {
## Packaging urxvt plugins {#sec-urxvt-pkg}
Urxvt plugins reside in `pkgs/applications/misc/rxvt-unicode-plugins`. To add a new plugin, create an expression in a subdirectory and add the package to the set in `pkgs/applications/misc/rxvt-unicode-plugins/default.nix`.
A plugin can be any kind of derivation; the only requirement is that it should always install Perl scripts in `$out/lib/urxvt/perl`. Look at existing plugins for examples.
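As an illustrative sketch (the plugin name and source are hypothetical), a minimal plugin derivation could look like this:
```nix
{ stdenv }:

stdenv.mkDerivation {
  pname = "urxvt-my-plugin";   # hypothetical plugin name
  version = "1.0";
  src = ./my-plugin;           # directory containing the perl script "my-plugin"

  installPhase = ''
    runHook preInstall
    install -Dm644 my-plugin $out/lib/urxvt/perl/my-plugin
    runHook postInstall
  '';
}
```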
If the plugin is itself a Perl package that needs to be imported from other plugins or scripts, add the following passthrough:
```nix
passthru.perlPackages = [ "self" ];
```
This will make the urxvt wrapper pick up the dependency and set up the Perl path accordingly.

@ -1,6 +1,6 @@
# WeeChat {#sec-weechat}
WeeChat can be configured to include your choice of plugins, reducing its closure size from the default configuration which includes all available plugins. To make use of this functionality, install an expression that overrides its configuration, such as:
```nix
weechat.override {configure = {availablePlugins, ...}: {
@ -13,7 +13,7 @@ If the `configure` function returns an attrset without the `plugins` attribute,
The plugins currently available are `python`, `perl`, `ruby`, `guile`, `tcl` and `lua`.
The Python and Perl plugins allow the addition of extra libraries. For instance, the `inotify.py` script in `weechat-scripts` requires D-Bus or libnotify, and the `fish.py` script requires `pycrypto`. To use these scripts, use the plugin's `withPackages` attribute:
```nix
weechat.override { configure = {availablePlugins, ...}: {
@ -41,7 +41,7 @@ weechat.override {
configure = { availablePlugins, ... }: {
init = ''
/set foo bar
/server add libera irc.libera.chat
'';
};
}
@ -49,7 +49,7 @@ weechat.override {
Further values can be added to the list of commands when running `weechat --run-command "your-commands"`.
Additionally, it's possible to specify scripts to be loaded when starting `weechat`. These will be loaded before the commands from `init`:
```nix
weechat.override {
@ -64,7 +64,7 @@ weechat.override {
}
```
In `nixpkgs` there's a subpackage which contains derivations for WeeChat scripts. Such derivations expect a `passthru.scripts` attribute, which contains a list of all scripts inside the store path. Furthermore, all scripts have to live in `$out/share`. An exemplary derivation looks like this:
```nix
{ stdenv, fetchurl }:

@ -2,7 +2,7 @@
The Nix expressions for the X.org packages reside in `pkgs/servers/x11/xorg/default.nix`. This file is automatically generated from lists of tarballs in an X.org release. As such it should not be modified directly; rather, you should modify the lists, the generator script or the file `pkgs/servers/x11/xorg/overrides.nix`, in which you can override or add to the derivations produced by the generator.
## Katamari Tarballs {#katamari-tarballs}
X.org upstream releases used to include [katamari](https://en.wiktionary.org/wiki/%E3%81%8B%E3%81%9F%E3%81%BE%E3%82%8A) releases, which included a holistic recommended version for each tarball, up until 7.7. To create a list of tarballs in a katamari release:
@ -14,11 +14,11 @@ cat $(PRINT_PATH=1 nix-prefetch-url $url | tail -n 1) \
| sort > "tarballs-$release.list"
```
## Individual Tarballs {#individual-tarballs}
The upstream release process for [X11R7.8](https://x.org/wiki/Releases/7.8/) does not include a planned katamari. Instead, each component of X.org is released as its own tarball. We maintain `pkgs/servers/x11/xorg/tarballs.list` as a list of tarballs for each individual package. This list includes X.org core libraries and protocol descriptions, extra newer X11 interface libraries, like `xorg.libxcb`, and classic utilities which are largely unused but still available if needed, like `xorg.imake`.
## Generating Nix Expressions {#generating-nix-expressions}
The generator is invoked as follows:
@ -29,6 +29,6 @@ cd pkgs/servers/x11/xorg
For each of the tarballs in the `.list` files, the script downloads it, unpacks it, and searches its `configure.ac` and `*.pc.in` files for dependencies. This information is used to generate `default.nix`. The generator caches downloaded tarballs between runs. Pay close attention to the `NOT FOUND: $NAME` messages at the end of the run, since they may indicate missing dependencies. (Some might be optional dependencies, however.)
## Overriding the Generator {#overriding-the-generator}
If the expression for a package requires derivation attributes that the generator cannot figure out automatically (say, `patches` or a `postInstall` hook), you should modify `pkgs/servers/x11/xorg/overrides.nix`.

@ -18,6 +18,8 @@
Additional commands to be executed for finalizing the derivation with runner script.
- `runScript`
A command that would be executed inside the sandbox and passed all the command line arguments. It defaults to `bash`.
- `profile`
Optional script for `/etc/profile` within the sandbox.
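Before the full example below, here is a minimal sketch showing only the optional `profile` attribute (assuming the `buildFHSUserEnv` builder described in this section; the exported variable is hypothetical):
```nix
{ pkgs ? import <nixpkgs> {} }:

(pkgs.buildFHSUserEnv {
  name = "profile-example";
  # sourced as /etc/profile inside the sandbox
  profile = ''
    export MY_ENV_VAR=1
  '';
}).env
```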
One can create a simple environment using a `shell.nix` like this:
@ -28,7 +30,7 @@ One can create a simple environment using a `shell.nix` like that:
name = "simple-x11-env";
targetPkgs = pkgs: (with pkgs;
[ udev
alsa-lib
]) ++ (with pkgs.xorg;
[ libX11
libXcursor
@ -36,10 +38,12 @@ One can create a simple environment using a `shell.nix` like that:
]);
multiPkgs = pkgs: (with pkgs;
[ udev
alsa-lib
]);
runScript = "bash";
}).env
```
Running `nix-shell` would then drop you into a shell with these libraries and binaries available. You can use this to run closed-source applications which expect FHS structure without hassles: simply change `runScript` to the application path, e.g. `./bin/start.sh` -- relative paths are supported.
Additionally, the FHS builder links all relocated gsettings-schemas (the glib setup-hook moves them to `share/gsettings-schemas/${name}/glib-2.0/schemas`) to their standard FHS location. This means you don't need to wrap binaries with `wrapGAppsHook`.

@ -1,17 +1,37 @@
# pkgs.mkShell {#sec-pkgs-mkShell}
`pkgs.mkShell` is a specialized `stdenv.mkDerivation` that removes some
repetition when using it with `nix-shell` (or `nix develop`).
## Usage {#sec-pkgs-mkShell-usage}
Here is a common usage example:
```nix
{ pkgs ? import <nixpkgs> {} }:
pkgs.mkShell {
# specify which packages to add to the shell environment
packages = [ pkgs.gnumake ];
# add all the dependencies, of the given packages, to the shell environment
inputsFrom = [ pkgs.hello pkgs.gnutar ];
shellHook = ''
export DEBUG=1
'';
}
```
## Attributes
* `name` (default: `nix-shell`). Set the name of the derivation.
* `packages` (default: `[]`). Add executable packages to the `nix-shell` environment.
* `inputsFrom` (default: `[]`). Add build dependencies of the listed derivations to the `nix-shell` environment.
* `shellHook` (default: `""`). Bash statements that are executed by `nix-shell`.
... all the attributes of `stdenv.mkDerivation`.
## Building the shell
This derivation's output will contain a text file that references all the build inputs. This is useful in CI, where we want to make sure that every derivation and its dependencies build properly, or when creating a GC root so that the build dependencies don't get garbage-collected.

@ -0,0 +1,128 @@
# Testers {#chap-testers}
This chapter describes several testing builders which are available in the `testers` namespace.
## `testVersion` {#tester-testVersion}
Checks that the command output contains the specified version.
Although simplistic, this test assures that the main program
can run. While there's no substitute for a real test case,
it does catch dynamic linking errors and such. It also provides
some protection against accidentally building the wrong version,
for example when using an 'old' hash in a fixed-output derivation.
Examples:
```nix
passthru.tests.version = testVersion { package = hello; };
passthru.tests.version = testVersion {
package = seaweedfs;
command = "weed version";
};
passthru.tests.version = testVersion {
package = key;
command = "KeY --help";
# Wrong '2.5' version in the code. Drop on next version.
version = "2.5";
};
```
## `testEqualDerivation` {#tester-testEqualDerivation}
Checks that two packages produce the exact same build instructions.
This can be used to make sure that a certain difference of configuration,
such as the presence of an overlay does not cause a cache miss.
When the derivations are equal, the return value is an empty file.
Otherwise, the build log explains the difference via `nix-diff`.
Example:
```nix
testEqualDerivation
"The hello package must stay the same when enabling checks."
hello
(hello.overrideAttrs(o: { doCheck = true; }))
```
## `invalidateFetcherByDrvHash` {#tester-invalidateFetcherByDrvHash}
Use the derivation hash to invalidate the output via name, for testing.
Type: `(a@{ name, ... } -> Derivation) -> a -> Derivation`
Normally, fixed output derivations can and should be cached by their output
hash only, but for testing we want to re-fetch every time the fetcher changes.
Changes to the fetcher become apparent in the drvPath, which is a hash of
how to fetch, rather than a fixed store path.
By inserting this hash into the name, we can make sure to re-run the fetcher
every time the fetcher changes.
This relies on the assumption that Nix isn't clever enough to reuse its
database of local store contents to optimize fetching.
You might notice that the "salted" name derives from the normal invocation,
not the final derivation. `invalidateFetcherByDrvHash` has to invoke the fetcher
function twice: once to get a derivation hash, and again to produce the final
fixed output derivation.
Example:
```nix
tests.fetchgit = invalidateFetcherByDrvHash fetchgit {
name = "nix-source";
url = "https://github.com/NixOS/nix";
rev = "9d9dbe6ed05854e03811c361a3380e09183f4f4a";
sha256 = "sha256-7DszvbCNTjpzGRmpIVAWXk20P0/XTrWZ79KSOGLrUWY=";
};
```
## `nixosTest` {#tester-nixosTest}
Run a NixOS VM network test using this evaluation of Nixpkgs.
NOTE: This function is primarily for external use. NixOS itself uses `make-test-python.nix` directly. Packages defined in Nixpkgs [reuse NixOS tests via `nixosTests`, plural](#ssec-nixos-tests-linking).
It is mostly equivalent to the function `import ./make-test-python.nix` from the
[NixOS manual](https://nixos.org/nixos/manual/index.html#sec-nixos-tests),
except that the current application of Nixpkgs (`pkgs`) will be used, instead of
letting NixOS invoke Nixpkgs anew.
If a test machine needs to set NixOS options under `nixpkgs`, it must set only the
`nixpkgs.pkgs` option.
### Parameter
A [NixOS VM test network](https://nixos.org/nixos/manual/index.html#sec-nixos-tests), or path to it. Example:
```nix
{
name = "my-test";
nodes = {
machine1 = { lib, pkgs, nodes, ... }: {
environment.systemPackages = [ pkgs.hello ];
services.foo.enable = true;
};
# machine2 = ...;
};
testScript = ''
start_all()
machine1.wait_for_unit("foo.service")
machine1.succeed("hello | foo-send")
'';
}
```
### Result
A derivation that runs the VM test.
Notable attributes:
* `nodes`: the evaluated NixOS configurations. Useful for debugging and exploring the configuration.
* `driverInteractive`: a script that launches an interactive Python session in the context of the `testScript`.

@ -35,10 +35,10 @@ This works just like `runCommand`. The only difference is that it also provides
## `runCommandLocal` {#trivial-builder-runCommandLocal}
Variant of `runCommand` that forces the derivation to be built locally; it is not substituted. This is intended for very cheap commands (<1s execution time). It saves on the network round-trip and can speed up a build.
::: {.note}
This sets [`allowSubstitutes` to `false`](https://nixos.org/nix/manual/#adv-attr-allowSubstitutes), so only use `runCommandLocal` if you are certain the user will always have a builder for the `system` of the derivation. This should be true for most trivial use cases (e.g., just copying some files to a different location or adding symlinks) because there the `system` is usually the same as `builtins.currentSystem`.
:::
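A minimal sketch of its use (the file contents are chosen arbitrarily):
```nix
# Generates a tiny file; cheap enough that substituting it would cost more than building it
runCommandLocal "example-greeting" { } ''
  mkdir -p $out
  echo "built locally" > $out/greeting.txt
''
```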
## `writeTextFile`, `writeText`, `writeTextDir`, `writeScript`, `writeScriptBin` {#trivial-builder-writeText}
@ -47,9 +47,133 @@ These functions write `text` to the Nix store. This is useful for creating scrip
Many more commands wrap `writeTextFile` including `writeText`, `writeTextDir`, `writeScript`, and `writeScriptBin`. These are convenience functions over `writeTextFile`.
Here are a few examples:
```nix
# Writes my-file to /nix/store/<store path>
writeTextFile {
name = "my-file";
text = ''
Contents of File
'';
}
# See also the `writeText` helper function below.
# Writes executable my-file to /nix/store/<store path>/bin/my-file
writeTextFile {
name = "my-file";
text = ''
Contents of File
'';
executable = true;
destination = "/bin/my-file";
}
# Writes contents of file to /nix/store/<store path>
writeText "my-file"
''
Contents of File
'';
# Writes contents of file to /nix/store/<store path>/share/my-file
writeTextDir "share/my-file"
''
Contents of File
'';
# Writes my-file to /nix/store/<store path> and makes executable
writeScript "my-file"
''
Contents of File
'';
# Writes my-file to /nix/store/<store path>/bin/my-file and makes executable.
writeScriptBin "my-file"
''
Contents of File
'';
# Writes my-file to /nix/store/<store path> and makes executable.
writeShellScript "my-file"
''
Contents of File
'';
# Writes my-file to /nix/store/<store path>/bin/my-file and makes executable.
writeShellScriptBin "my-file"
''
Contents of File
'';
```
## `concatTextFile`, `concatText`, `concatScript` {#trivial-builder-concatText}
These functions concatenate `files` into a single file in the Nix store. This is useful for configuration files structured in lines of text. `concatTextFile` takes an attribute set with two required arguments, `name` and `files`. `name` corresponds to the name used in the Nix store path. `files` is the list of files to be concatenated. You can also set `executable` to true to give the resulting file the executable bit.
`concatText` and `concatScript` are simple wrappers over `concatTextFile`.
Here are a few examples:
```nix
# Writes my-file to /nix/store/<store path>
concatTextFile {
name = "my-file";
files = [ drv1 "${drv2}/path/to/file" ];
}
# See also the `concatText` helper function below.
# Writes executable my-file to /nix/store/<store path>/bin/my-file
concatTextFile {
name = "my-file";
files = [ drv1 "${drv2}/path/to/file" ];
executable = true;
destination = "/bin/my-file";
}
# Writes contents of files to /nix/store/<store path>
concatText "my-file" [ file1 file2 ]
# Writes contents of files to /nix/store/<store path>
concatScript "my-file" [ file1 file2 ]
```
## `writeShellApplication` {#trivial-builder-writeShellApplication}
This can be used to easily produce a shell script that has some dependencies (`runtimeInputs`). It automatically sets the `PATH` of the script to contain all of the listed inputs, sets some sanity shellopts (`errexit`, `nounset`, `pipefail`), and checks the resulting script with [`shellcheck`](https://github.com/koalaman/shellcheck).
For example, look at the following code:
```nix
writeShellApplication {
name = "show-nixos-org";
runtimeInputs = [ curl w3m ];
text = ''
curl -s 'https://nixos.org' | w3m -dump -T text/html
'';
}
```
Unlike with the normal `writeShellScriptBin`, there is no need to manually write out `${curl}/bin/curl`; setting the `PATH` is handled by `writeShellApplication`. Moreover, the script is checked with `shellcheck` for stricter validation.
## `symlinkJoin` {#trivial-builder-symlinkJoin}
This can be used to put many derivations into the same directory structure. It works by creating a new derivation and adding symlinks to each of the paths listed. It expects two arguments, `name`, and `paths`. `name` is the name used in the Nix store path for the created derivation. `paths` is a list of paths that will be symlinked. These paths can be to Nix store derivations or any other subdirectory contained within.
Here is an example:
```nix
# adds symlinks of hello and stack to current build and prints "links added"
symlinkJoin { name = "myexample"; paths = [ pkgs.hello pkgs.stack ]; postBuild = "echo links added"; }
```
This creates a derivation with a directory structure like the following:
```
/nix/store/sglsr5g079a5235hy29da3mq3hv8sjmm-myexample
|-- bin
| |-- hello -> /nix/store/qy93dp4a3rqyn2mz63fbxjg228hffwyw-hello-2.10/bin/hello
| `-- stack -> /nix/store/6lzdpxshx78281vy056lbk553ijsdr44-stack-2.1.3.1/bin/stack
`-- share
|-- bash-completion
| `-- completions
| `-- stack -> /nix/store/6lzdpxshx78281vy056lbk553ijsdr44-stack-2.1.3.1/share/bash-completion/completions/stack
|-- fish
| `-- vendor_completions.d
| `-- stack.fish -> /nix/store/6lzdpxshx78281vy056lbk553ijsdr44-stack-2.1.3.1/share/fish/vendor_completions.d/stack.fish
...
```
## `writeReferencesToFile` {#trivial-builder-writeReferencesToFile}
@ -95,5 +219,5 @@ produces an output path `/nix/store/<hash>-runtime-references` containing
/nix/store/<hash>-hello-2.10
```
but none of `hello`'s dependencies because those are not referenced directly
by `hi`'s output.

@ -6,7 +6,7 @@
- Do not use tab characters, i.e. configure your editor to use soft tabs. For instance, use `(setq-default indent-tabs-mode nil)` in Emacs. Everybody has different tab settings so it’s asking for trouble.
- Use `lowerCamelCase` for variable names, not `UpperCamelCase`. Note, this rule does not apply to package attribute names, which instead follow the rules in [](#sec-package-naming).
- Function calls with attribute set arguments are written as
@ -181,10 +181,22 @@
rev = "${version}";
```
- Building lists conditionally _should_ be done with `lib.optional(s)` instead of using `if cond then [ ... ] else null` or `if cond then [ ... ] else [ ]`.
```nix
buildInputs = lib.optional stdenv.isDarwin iconv;
```
instead of
```nix
buildInputs = if stdenv.isDarwin then [ iconv ] else null;
```
As an exception, an explicit conditional expression with null can be used when fixing an important bug without triggering a mass rebuild.
If this is done, a follow-up pull request _should_ be created to change the code to `lib.optional(s)`.
- Arguments should be listed in the order they are used, with the exception of `lib`, which always goes first.
## Package naming {#sec-package-naming}
@ -202,17 +214,17 @@ Most of the time, these are the same. For instance, the package `e2fsprogs` has
There are a few naming guidelines:
- The `pname` attribute _should_ be identical to the upstream package name.
- The `pname` and the `version` attributes _must not_ contain uppercase letters — e.g., `"mplayer"` instead of `"MPlayer"`.
- The `version` attribute _must_ start with a digit, e.g., `"0.3.1rc2"`.
- If a package is not a release but a commit from a repository, then the `version` attribute _must_ be the date of that (fetched) commit. The date _must_ be in `"unstable-YYYY-MM-DD"` format.
- Dashes in the package `pname` _should_ be preserved in new variable names, rather than converted to underscores or camel cased — e.g., `http-parser` instead of `http_parser` or `httpParser`. The hyphenated style is preferred in all three package names.
- If there are multiple versions of a package, this _should_ be reflected in the variable names in `all-packages.nix`, e.g. `json-c_0_9` and `json-c_0_11`. If there is an obvious “default” version, make an attribute like `json-c = json-c_0_9;`. See also [](#sec-versioning)
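For illustration, the guidelines above combine roughly as in the following sketch (the package and its source are illustrative; the fake hash would be replaced with the real one reported by the first failing build):
```nix
{ lib, stdenv, fetchFromGitHub }:

stdenv.mkDerivation rec {
  pname = "http-parser";   # lowercase upstream name, dashes preserved
  version = "2.9.4";       # starts with a digit, no uppercase letters
  src = fetchFromGitHub {
    owner = "nodejs";
    repo = "http-parser";
    rev = "v${version}";
    sha256 = lib.fakeSha256; # placeholder; replace with the real hash
  };
}
```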
## File naming and organisation {#sec-organisation}
@ -465,9 +477,9 @@ Preferred source hash type is sha256. There are several ways to get it.
For package updates, it is enough to change one character of the hash to make it invalid. For new packages, you can use `lib.fakeSha256`, `lib.fakeSha512` or any other fake hash.
This is a last-resort method, for when reconstructing the source URL is non-trivial and `nix-prefetch-url -A` isn't applicable (for example, [one of `kodi` dependencies](https://github.com/NixOS/nixpkgs/blob/d2ab091dd308b99e4912b805a5eb088dd536adb9/pkgs/applications/video/kodi/default.nix#L73)). The easiest way then is to replace the hash with a fake one and rebuild. The Nix build will fail, and the error message will contain the desired hash.
::: {.warning}
This method has security problems. Check below for details.
:::
@ -523,15 +535,16 @@ If you do need to do create this sort of patch file, one way to do so is with gi
4. Use git to create a diff, and pipe the output to a patch file:
```ShellSession
$ git diff -a > nixpkgs/pkgs/the/package/0001-changes.patch
```
If a patch is available online but does not cleanly apply, it can be modified in some fixed ways by using additional optional arguments for `fetchpatch`:
- `relative`: Similar to using `git-diff`'s `--relative` flag, only keep changes inside the specified directory, making paths relative to it.
- `stripLen`: Remove the first `stripLen` components of pathnames in the patch.
- `extraPrefix`: Prefix pathnames by this string.
- `excludes`: Exclude files matching these patterns (applies after the above arguments).
- `includes`: Include only files matching these patterns (applies after the above arguments).
- `revert`: Revert the patch.
Note that because the checksum is computed after applying these effects, using or modifying these arguments will have no effect unless the `sha256` argument is changed as well.
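As a sketch of how these arguments combine (the patch name and URL are placeholders):
```nix
fetchpatch {
  name = "example-fix.patch";                    # hypothetical
  url = "https://example.org/example-fix.patch"; # placeholder URL
  stripLen = 1;                # drop the first path component from the patch
  extraPrefix = "src/";        # then prefix the remaining paths with src/
  excludes = [ "doc/*" ];      # drop hunks that touch documentation
  sha256 = lib.fakeSha256;     # recompute whenever the arguments above change
}
```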
@ -540,9 +553,34 @@ Note that because the checksum is computed after applying these effects, using o
Tests are important to ensure quality and make reviews and automatic updates easy.
The following types of tests exist:
* [NixOS **module tests**](https://nixos.org/manual/nixos/stable/#sec-nixos-tests), which spawn one or more NixOS VMs. They exercise both NixOS modules and the packaged programs used within them. For example, a NixOS module test can start a web server VM running the `nginx` module, and a client VM running `curl` or a graphical `firefox`, and test that they can talk to each other and display the correct content.
* Nix **package tests** are a lightweight alternative to NixOS module tests. They should be used to create simple integration tests for packages, but cannot test NixOS services, and some programs with graphical user interfaces may also be difficult to test with them.
* The **`checkPhase` of a package**, which should execute the unit tests that are included in the source code of a package.
Here in the nixpkgs manual we describe mostly _package tests_; for _module tests_ head over to the corresponding [section in the NixOS manual](https://nixos.org/manual/nixos/stable/#sec-nixos-tests).
### Writing inline package tests {#ssec-inline-package-tests-writing}
Very simple tests can be written inline:
```nix
{ …, yq-go }:
buildGoModule rec {
passthru.tests = {
simple = runCommand "${pname}-test" {} ''
echo "test: 1" | ${yq-go}/bin/yq eval -j > $out
[ "$(cat $out | tr -d $'\n ')" = '{"test":1}' ]
'';
};
}
```
### Writing larger package tests {#ssec-package-tests-writing}
This is an example using the `phoronix-test-suite` package with the current best practices.
@ -571,7 +609,7 @@ let
inherit (phoronix-test-suite) pname version;
in
runCommand "${pname}-tests" { meta.timeout = 3; }
runCommand "${pname}-tests" { meta.timeout = 60; }
''
# automatic initial setup to prevent interactive questions
${phoronix-test-suite}/bin/phoronix-test-suite enterprise-setup >/dev/null
@ -605,3 +643,23 @@ Here are examples of package tests:
- [Spacy annotation test](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/python-modules/spacy/annotation-test/default.nix)
- [Libtorch test](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/libraries/science/math/libtorch/test/default.nix)
- [Multiple tests for nanopb](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/libraries/nanopb/default.nix)
### Linking NixOS module tests to a package {#ssec-nixos-tests-linking}
Like [package tests](#ssec-package-tests-writing) as shown above, [NixOS module tests](https://nixos.org/manual/nixos/stable/#sec-nixos-tests) can also be linked to a package, so that the tests can be easily run when changing the related package.
For example, assuming we're packaging `nginx`, we can link its module test via `passthru.tests`:
```nix
{ stdenv, lib, nixosTests }:
stdenv.mkDerivation {
...
passthru.tests = {
nginx = nixosTests.nginx;
};
...
}
```

@ -1,6 +1,6 @@
# Contributing to this documentation {#chap-contributing}
The sources of the Nixpkgs manual are in the [doc](https://github.com/NixOS/nixpkgs/tree/master/doc) subdirectory of the Nixpkgs repository. The manual is still partially written in DocBook but it is progressively being converted to [Markdown](#sec-contributing-markup).
You can quickly check your edits with `make`:
@ -22,3 +22,85 @@ $ nix-shell
```
If the build succeeds, the manual will be in `./result/share/doc/nixpkgs/manual.html`.
## Syntax {#sec-contributing-markup}
As per [RFC 0072](https://github.com/NixOS/rfcs/pull/72), all new documentation content should be written in [CommonMark](https://commonmark.org/) Markdown dialect.
Additionally, the following syntax extensions are currently used:
- []{#ssec-contributing-markup-anchors}
Explicitly defined **anchors** on headings, to allow linking to sections. These should always be used, to ensure the anchors can be linked even when the heading text changes, and to prevent conflicts between [automatically assigned identifiers](https://github.com/jgm/commonmark-hs/blob/master/commonmark-extensions/test/auto_identifiers.md).
It uses the widely compatible [header attributes](https://github.com/jgm/commonmark-hs/blob/master/commonmark-extensions/test/attributes.md) syntax:
```markdown
## Syntax {#sec-contributing-markup}
```
- []{#ssec-contributing-markup-anchors-inline}
**Inline anchors**, which allow linking to an arbitrary place in the text (e.g. individual list items, sentences…).
They are defined using a hybrid of the link syntax with the attributes syntax known from headings, called [bracketed spans](https://github.com/jgm/commonmark-hs/blob/master/commonmark-extensions/test/bracketed_spans.md):
```markdown
- []{#ssec-gnome-hooks-glib} `glib` setup hook will populate `GSETTINGS_SCHEMAS_PATH` and then `wrapGAppsHook` will prepend it to `XDG_DATA_DIRS`.
```
- []{#ssec-contributing-markup-automatic-links}
If you **omit a link text** for a link pointing to a section, the text will be substituted automatically. For example, `[](#chap-contributing)` will result in [](#chap-contributing).
This syntax is taken from [MyST](https://myst-parser.readthedocs.io/en/latest/using/syntax.html#targets-and-cross-referencing).
- []{#ssec-contributing-markup-inline-roles}
If you want to link to a man page, you can use `` {manpage}`nix.conf(5)` ``, which will turn into {manpage}`nix.conf(5)`.
The references will turn into links when a mapping exists in {file}`doc/build-aux/pandoc-filters/link-unix-man-references.lua`.
This syntax is taken from [MyST](https://myst-parser.readthedocs.io/en/latest/syntax/syntax.html#roles-an-in-line-extension-point), though the feature originates from [reStructuredText](https://www.sphinx-doc.org/en/master/usage/restructuredtext/roles.html#role-manpage) with slightly different syntax.
- []{#ssec-contributing-markup-admonitions}
**Admonitions**, set off from the text to bring attention to something.
It uses pandoc’s [fenced `div`s syntax](https://github.com/jgm/commonmark-hs/blob/master/commonmark-extensions/test/fenced_divs.md):
```markdown
::: {.warning}
This is a warning
:::
```
which renders as
> ::: {.warning}
> This is a warning.
> :::
The following are supported:
- [`caution`](https://tdg.docbook.org/tdg/5.0/caution.html)
- [`important`](https://tdg.docbook.org/tdg/5.0/important.html)
- [`note`](https://tdg.docbook.org/tdg/5.0/note.html)
- [`tip`](https://tdg.docbook.org/tdg/5.0/tip.html)
- [`warning`](https://tdg.docbook.org/tdg/5.0/warning.html)
- []{#ssec-contributing-markup-definition-lists}
[**Definition lists**](https://github.com/jgm/commonmark-hs/blob/master/commonmark-extensions/test/definition_lists.md), for defining a group of terms:
```markdown
pear
: green or yellow bulbous fruit
watermelon
: green fruit with red flesh
```
which renders as
> pear
> : green or yellow bulbous fruit
>
> watermelon
> : green fruit with red flesh
For contributing to the legacy parts, please see [DocBook: The Definitive Guide](https://tdg.docbook.org/) or the [DocBook rocks! primer](https://web.archive.org/web/20200816233747/https://docbook.rocks/).

@ -9,7 +9,7 @@ To add a package to Nixpkgs:
$ cd nixpkgs
```
2. Find a good place in the Nixpkgs tree to add the Nix expression for your package. For instance, a library package typically goes into `pkgs/development/libraries/pkgname`, while a web browser goes into `pkgs/applications/networking/browsers/pkgname`. See [](#sec-organisation) for some hints on the tree organisation. Create a directory for your package, e.g.
```ShellSession
$ mkdir pkgs/development/libraries/libfoo

@ -1,6 +1,6 @@
# Reviewing contributions {#chap-reviewing-contributions}
::: {.warning}
The following section is a draft, and the policy for reviewing is still being discussed in issues such as [#11166](https://github.com/NixOS/nixpkgs/issues/11166) and [#20836](https://github.com/NixOS/nixpkgs/issues/20836).
:::
@ -35,15 +35,18 @@ Reviewing process:
- Building the package locally.
- Pull requests are often targeted to the master or staging branch, and building the pull request locally when it is submitted can trigger many source builds.
- It is possible to rebase the changes on nixos-unstable or nixpkgs-unstable for easier review by running the following commands from a nixpkgs clone.
```ShellSession
$ git fetch origin nixos-unstable
$ git fetch origin pull/PRNUMBER/head
$ git rebase --onto nixos-unstable BASEBRANCH FETCH_HEAD
```
- The first command fetches the nixos-unstable branch.
- The second command fetches the pull request changes, `PRNUMBER` is the number at the end of the pull request title and `BASEBRANCH` the base branch of the pull request.
- The third command rebases the pull request changes to the nixos-unstable branch.
- The [nixpkgs-review](https://github.com/Mic92/nixpkgs-review) tool can be used to review a pull request's content in a single command. `PRNUMBER` should be replaced by the number at the end of the pull request title. You can also provide the full GitHub pull request URL.
```ShellSession
$ nix-shell -p nixpkgs-review --run "nixpkgs-review pr PRNUMBER"
```
@ -100,7 +103,8 @@ Sample template for a new package review is provided below.
- [ ] `meta.maintainers` is set
- [ ] build time only dependencies are declared in `nativeBuildInputs`
- [ ] source is fetched using the appropriate function
- [ ] the list of `phases` is not overridden
- [ ] when a phase (like `installPhase`) is overridden it starts with `runHook preInstall` and ends with `runHook postInstall`.
- [ ] patches that are remotely available are fetched with `fetchpatch`
##### Possible improvements
@ -118,10 +122,10 @@ Reviewing process:
- [CODEOWNERS](https://help.github.com/articles/about-codeowners/) will make GitHub notify users based on the submitted changes, but it can happen that it misses some of the package maintainers.
- Ensure that the module tests, if any, are succeeding.
- Ensure that the introduced options are correct.
- Type should be appropriate (string-related types differ in their merging capabilities; `loaOf` and `string` types are deprecated).
- Description, default and example should be provided.
- Ensure that option changes are backward compatible.
- `mkRenamedOptionModuleWith` provides a way to make option changes backward compatible.
- Ensure that removed options are declared with `mkRemovedOptionModule`
- Ensure that changes that are not backward compatible are mentioned in release notes.
- Ensure that documentation affected by the change is updated.
@ -153,7 +157,7 @@ Reviewing process:
- Ensure that the module tests, if any, are succeeding.
- Ensure that the introduced options are correct.
- Type should be appropriate (string-related types differ in their merging capabilities; `loaOf` and `string` types are deprecated).
- Description, default and example should be provided.
- Ensure that the module's `meta` field is present
- Maintainers should be declared in `meta.maintainers`.

@ -43,13 +43,13 @@
- nixpkgs:
- update pkg
- `nix-env -iA pkg-attribute-name -f <path to your local nixpkgs folder>`
- add pkg
- Make sure it’s in `pkgs/top-level/all-packages.nix`
- `nix-env -iA pkg-attribute-name -f <path to your local nixpkgs folder>`
- _If you don’t want to install pkg in your profile_.
- `nix-build -A pkg-attribute-name <path to your local nixpkgs folder>` and check results in the folder `result`. It will appear in the same directory where you did `nix-build`.
- If you installed your package with `nix-env`, you can run `nix-env -e pkg-name` where `pkg-name` is as reported by `nix-env -q` to uninstall it from your system.
- NixOS and its modules:
- You can add a new module to your NixOS configuration file (usually `/etc/nixos/configuration.nix`) and run `sudo nixos-rebuild test -I nixpkgs=<path to your local nixpkgs folder> --fast`.
@ -62,7 +62,7 @@
- Push your changes to your fork of nixpkgs.
- Create the pull request
- Follow [the contribution guidelines](https://github.com/NixOS/nixpkgs/blob/master/CONTRIBUTING.md#submitting-changes).
## Submitting security fixes {#submitting-changes-submitting-security-fixes}
@ -71,6 +71,7 @@ Security fixes are submitted in the same way as other changes and thus the same
- If a new version fixing the vulnerability has been released, update the package;
- If the security fix comes in the form of a patch and a CVE is available, then add the patch to the Nixpkgs tree, and apply it to the package.
The name of the patch should be the CVE identifier, e.g. `CVE-2019-13636.patch`. If a patch is fetched, the name needs to be set as well, e.g.:
```nix
(fetchpatch {
name = "CVE-2019-11068.patch";
@ -89,17 +90,18 @@ There is currently no policy when to remove a package.
Before removing a package, one should try to find a new maintainer or fix smaller issues first.
### Steps to remove a package from Nixpkgs {#steps-to-remove-a-package-from-nixpkgs}
We use `jbidwatcher` as an example of a discontinued project here.
1. Have Nixpkgs checked out locally and up to date.
1. Create a new branch for your change, e.g. `git checkout -b jbidwatcher`
1. Remove the actual package including its directory, e.g. `git rm -rf pkgs/applications/misc/jbidwatcher`
1. Remove the package from the list of all packages (`pkgs/top-level/all-packages.nix`).
1. Add an alias for the package name in `pkgs/top-level/aliases.nix` (There is also `pkgs/applications/editors/vim/plugins/aliases.nix`. Package sets typically do not have aliases, so we can't add them there.)
For example in this case:
```
jbidwatcher = throw "jbidwatcher was discontinued in march 2021"; # added 2021-03-15
```
@ -191,7 +193,7 @@ It’s important to test any executables generated by a build when you change or
### Meets Nixpkgs contribution standards {#submitting-changes-contribution-standards}
The last checkbox is “fits [CONTRIBUTING.md](https://github.com/NixOS/nixpkgs/blob/master/CONTRIBUTING.md)”. The contributing document has detailed information on standards the Nix community has for commit messages, reviews, licensing of contributions you make to the project, etc. Everyone should read and understand the standards the community has for contributing before submitting a pull request.
## Hotfixing pull requests {#submitting-changes-hotfixing-pull-requests}
@ -225,7 +227,7 @@ digraph {
}
```
[This GitHub Action](https://github.com/NixOS/nixpkgs/blob/master/.github/workflows/periodic-merge-6h.yml) brings changes from `master` to `staging-next` and from `staging-next` to `staging` every 6 hours.
### Master branch {#submitting-changes-master-branch}
@ -234,19 +236,31 @@ The `master` branch is the main development branch. It should only see non-break
### Staging branch {#submitting-changes-staging-branch}
The `staging` branch is a development branch where mass-rebuilds go. It should only see non-breaking mass-rebuild commits. That means it is not to be used for testing, and changes must have been well tested already. If the branch is already in a broken state, please refrain from adding extra new breakages.
The `staging` branch is a development branch where mass-rebuilds go. Mass rebuilds are commits that cause rebuilds of many packages, roughly more than 500 (or around 1000 if the affected packages are lightweight). It should only see non-breaking mass-rebuild commits. That means it is not to be used for testing, and changes must have been well tested already. If the branch is already in a broken state, please refrain from adding extra new breakages.
### Staging-next branch {#submitting-changes-staging-next-branch}
The `staging-next` branch is for stabilizing mass-rebuilds submitted to the `staging` branch prior to merging them into `master`. Mass-rebuilds should go via the `staging` branch. It should only see non-breaking commits that are fixing issues blocking it from being merged into the `master ` branch.
The `staging-next` branch is for stabilizing mass-rebuilds submitted to the `staging` branch prior to merging them into `master`. Mass-rebuilds must go via the `staging` branch. It must only see non-breaking commits that are fixing issues blocking it from being merged into the `master` branch.
If the branch is already in a broken state, please refrain from adding extra new breakages. Stabilize it for a few days and then merge into master.
### Stable release branches {#submitting-changes-stable-release-branches}
For cherry-picking a commit to a stable release branch (“backporting”), use `git cherry-pick -x <original commit>` so that the original commit id is included in the commit.
The same staging workflow applies to stable release branches, but the main branch is called `release-*` instead of `master`.
Example branch names: `release-21.11`, `staging-21.11`, `staging-next-21.11`.
Most changes added to the stable release branches are cherry-picked (“backported”) from the `master` and staging branches.
#### Automatically backporting a Pull Request {#submitting-changes-stable-release-branches-automatic-backports}
Assign label `backport <branch>` (e.g. `backport release-21.11`) to the PR and a backport PR is automatically created after the PR is merged.
Add a reason for the backport by using `git cherry-pick -xe <original commit>` instead when it is not obvious from the original commit message. It is not needed when it's a minor version update that includes security and bug fixes but don't add new features or when the commit fixes an otherwise broken package.
#### Manually backporting changes {#submitting-changes-stable-release-branches-manual-backports}
Cherry-pick changes via `git cherry-pick -x <original commit>` so that the original commit id is included in the commit message.
Add a reason for the backport when it is not obvious from the original commit message. You can do this by cherry picking with `git cherry-pick -xe <original commit>`, which allows editing the commit message. This is not needed for minor version updates that include security and bug fixes but don't add new features or when the commit fixes an otherwise broken package.
Here is an example of a cherry-picked commit message with good reason description:
@ -265,3 +279,14 @@ Other examples of reasons are:
- Previously the build would fail due to, e.g., `getaddrinfo` not being defined
- The previous download links were all broken
- Crash when starting on some X11 systems
#### Acceptable backport criteria
Some changes cannot be backported to the stable branches, most notably breaking changes. The goal is to let stable users update packages without interruption.
However, many changes can be backported, including:
- New Packages / Modules
- Security / Patch updates
- Version updates which include new functionality (but no breaking changes)
- Services which require a client to be up-to-date regardless. (E.g. `spotify`, `steam`, or `discord`)
- Security critical applications (E.g. `firefox`)

@ -23,6 +23,14 @@ let
<xsl:import href="${./parameters.xml}"/>
</xsl:stylesheet>
'';
# NB: This file describes the Nixpkgs manual, which happens to use module
# docs infra originally developed for NixOS.
optionsDoc = pkgs.nixosOptionsDoc {
inherit (pkgs.lib.evalModules { modules = [ ../../pkgs/top-level/config.nix ]; }) options;
documentType = "none";
};
in pkgs.runCommand "doc-support" {}
''
mkdir result
@ -30,6 +38,7 @@ in pkgs.runCommand "doc-support" {}
cd result
ln -s ${locationsXml} ./function-locations.xml
ln -s ${functionDocs} ./function-docs
ln -s ${optionsDoc.optionsDocBook} ./config-options.docbook.xml
ln -s ${pkgs.docbook5}/xml/rng/docbook/docbook.rng ./docbook.rng
ln -s ${pkgs.docbook_xsl_ns}/xml/xsl ./xsl

@ -22,5 +22,6 @@ with pkgs; stdenv.mkDerivation {
docgen lists 'List manipulation functions'
docgen debug 'Debugging functions'
docgen options 'NixOS / nixpkgs option handling'
docgen sources 'Source filtering functions'
'';
}

@ -7,8 +7,8 @@
The nixpkgs repository has several utility functions to manipulate Nix expressions.
</para>
<xi:include href="functions/library.xml" />
<xi:include href="functions/generators.xml" />
<xi:include href="functions/debug.xml" />
<xi:include href="functions/prefer-remote-fetch.xml" />
<xi:include href="functions/nix-gitignore.xml" />
<xi:include href="functions/generators.section.xml" />
<xi:include href="functions/debug.section.xml" />
<xi:include href="functions/prefer-remote-fetch.section.xml" />
<xi:include href="functions/nix-gitignore.section.xml" />
</chapter>

@ -0,0 +1,5 @@
# Debugging Nix Expressions {#sec-debug}
Nix is a unityped, dynamic language; this means every value can potentially appear anywhere. Since it is also non-strict, evaluation order and what ultimately gets evaluated might surprise you. Therefore it is important to be able to debug Nix expressions.
In the `lib/debug.nix` file you will find a number of functions that help (pretty-)printing values while evaluation is running. You can even specify how deep these values should be printed recursively, and transform them on the fly. Please consult the docstrings in `lib/debug.nix` for usage information.

@ -1,14 +0,0 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xml:id="sec-debug">
<title>Debugging Nix Expressions</title>
<para>
Nix is a unityped, dynamic language, this means every value can potentially appear anywhere. Since it is also non-strict, evaluation order and what ultimately is evaluated might surprise you. Therefore it is important to be able to debug nix expressions.
</para>
<para>
In the <literal>lib/debug.nix</literal> file you will find a number of functions that help (pretty-)printing values while evaluation is runnnig. You can even specify how deep these values should be printed recursively, and transform them on the fly. Please consult the docstrings in <literal>lib/debug.nix</literal> for usage information.
</para>
</section>

@ -0,0 +1,56 @@
# Generators {#sec-generators}
Generators are functions that create file formats from nix data structures, e.g. for configuration files. There are generators available for: `INI`, `JSON` and `YAML`
All generators follow a similar call interface: `generatorName configFunctions data`, where `configFunctions` is an attrset of user-defined functions that format nested parts of the content. They each have common defaults, so often they do not need to be set manually. An example is `mkSectionName ? (name: libStr.escape [ "[" "]" ] name)` from the `INI` generator. It receives the name of a section and sanitizes it. The default `mkSectionName` escapes `[` and `]` with a backslash.
Generators can be fine-tuned to produce exactly the file format required by your application/service. One example is an INI-file format which uses `: ` as separator, the strings `"yes"`/`"no"` as boolean values and requires all string values to be quoted:
```nix
with lib;
let
customToINI = generators.toINI {
# specifies how to format a key/value pair
mkKeyValue = generators.mkKeyValueDefault {
# specifies the generated string for a subset of nix values
mkValueString = v:
if v == true then ''"yes"''
else if v == false then ''"no"''
else if isString v then ''"${v}"''
# and delegates all other values to the default generator
else generators.mkValueStringDefault {} v;
} ":";
};
# the INI file can now be given as plain old nix values
in customToINI {
main = {
pushinfo = true;
autopush = false;
host = "localhost";
port = 42;
};
mergetool = {
merge = "diff3";
};
}
```
This will produce the following INI file as a Nix string:
```INI
[main]
autopush:"no"
host:"localhost"
port:42
pushinfo:"yes"
str\:ange:"very::strange"
[mergetool]
merge:"diff3"
```
::: {.note}
Nix store paths can be converted to strings by enclosing a derivation attribute like so: `"${drv}"`.
:::
Detailed documentation for each generator can be found in `lib/generators.nix`.
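As a smaller illustration, the JSON generator mentioned above follows the same call interface; this is a minimal sketch with made-up data, and the resulting JSON text is shown as a comment:

```nix
with lib;

# The first argument is an (empty) attrset of options, kept for consistency
# with the other generators; the second argument is the data to serialize.
generators.toJSON { } {
  hostname = "localhost";
  port = 42;
}
# => {"hostname":"localhost","port":42}
```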

@ -1,74 +0,0 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xml:id="sec-generators">
<title>Generators</title>
<para>
Generators are functions that create file formats from nix data structures, e.g. for configuration files. There are generators available for: <literal>INI</literal>, <literal>JSON</literal> and <literal>YAML</literal>
</para>
<para>
All generators follow a similar call interface: <code>generatorName configFunctions data</code>, where <literal>configFunctions</literal> is an attrset of user-defined functions that format nested parts of the content. They each have common defaults, so often they do not need to be set manually. An example is <code>mkSectionName ? (name: libStr.escape [ "[" "]" ] name)</code> from the <literal>INI</literal> generator. It receives the name of a section and sanitizes it. The default <literal>mkSectionName</literal> escapes <literal>[</literal> and <literal>]</literal> with a backslash.
</para>
<para>
Generators can be fine-tuned to produce exactly the file format required by your application/service. One example is an INI-file format which uses <literal>: </literal> as separator, the strings <literal>"yes"</literal>/<literal>"no"</literal> as boolean values and requires all string values to be quoted:
</para>
<programlisting>
with lib;
let
customToINI = generators.toINI {
# specifies how to format a key/value pair
mkKeyValue = generators.mkKeyValueDefault {
# specifies the generated string for a subset of nix values
mkValueString = v:
if v == true then ''"yes"''
else if v == false then ''"no"''
else if isString v then ''"${v}"''
# and delegats all other values to the default generator
else generators.mkValueStringDefault {} v;
} ":";
};
# the INI file can now be given as plain old nix values
in customToINI {
main = {
pushinfo = true;
autopush = false;
host = "localhost";
port = 42;
};
mergetool = {
merge = "diff3";
};
}
</programlisting>
<para>
This will produce the following INI file as nix string:
</para>
<programlisting>
[main]
autopush:"no"
host:"localhost"
port:42
pushinfo:"yes"
str\:ange:"very::strange"
[mergetool]
merge:"diff3"
</programlisting>
<note>
<para>
Nix store paths can be converted to strings by enclosing a derivation attribute like so: <code>"${drv}"</code>.
</para>
</note>
<para>
Detailed documentation for each generator can be found in <literal>lib/generators.nix</literal>.
</para>
</section>

@ -25,4 +25,6 @@
<xi:include href="./library/generated/debug.xml" />
<xi:include href="./library/generated/options.xml" />
<xi:include href="./library/generated/sources.xml" />
</section>

@ -772,7 +772,7 @@ nameValuePair "some" 6
<title>Modifying each value of an attribute set</title>
<programlisting><![CDATA[
lib.attrsets.mapAttrs
(name: value: name + "-" value)
(name: value: name + "-" + value)
{ x = "foo"; y = "bar"; }
=> { x = "x-foo"; y = "y-bar"; }
]]></programlisting>
@ -1474,7 +1474,7 @@ lib.attrsets.zipAttrsWith
<section xml:id="function-library-lib.attrsets.zipAttrs">
<title><function>lib.attrsets.zipAttrs</function></title>
<subtitle><literal>zipAttrsWith :: [ AttrSet ] -> AttrSet</literal>
<subtitle><literal>zipAttrs :: [ AttrSet ] -> AttrSet</literal>
</subtitle>
<xi:include href="./locations.xml" xpointer="lib.attrsets.zipAttrs" />

@ -0,0 +1,49 @@
# pkgs.nix-gitignore {#sec-pkgs-nix-gitignore}
`pkgs.nix-gitignore` is a function that acts similarly to `builtins.filterSource` but also allows filtering with the help of the gitignore format.
## Usage {#sec-pkgs-nix-gitignore-usage}
`pkgs.nix-gitignore` exports a number of functions, but you'll most likely need either `gitignoreSource` or `gitignoreSourcePure`. As their first argument, they both accept either 1. a file with gitignore lines, 2. a string with gitignore lines, or 3. a list of either of the two. These will be concatenated into a single big string.
```nix
{ pkgs ? import <nixpkgs> {} }:
nix-gitignore.gitignoreSource [] ./source
# Simplest version
nix-gitignore.gitignoreSource "supplemental-ignores\n" ./source
# This one reads the ./source/.gitignore and concats the auxiliary ignores
nix-gitignore.gitignoreSourcePure "ignore-this\nignore-that\n" ./source
# Use this string as gitignore, don't read ./source/.gitignore.
nix-gitignore.gitignoreSourcePure ["ignore-this\nignore-that\n", ~/.gitignore] ./source
# It also accepts a list (of strings and paths) that will be concatenated
# once the paths are turned to strings via readFile.
```
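In a package expression the filtered tree is typically used as the `src` attribute; a minimal sketch (the package name and metadata are hypothetical):

```nix
{ stdenv, nix-gitignore }:

stdenv.mkDerivation {
  pname = "my-package";   # hypothetical package
  version = "1.0";
  # Filter the local source tree according to ./source/.gitignore
  src = nix-gitignore.gitignoreSource [] ./source;
}
```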
These functions are derived from the `Filter` functions by setting the first filter argument to `(_: _: true)`:
```nix
gitignoreSourcePure = gitignoreFilterSourcePure (_: _: true);
gitignoreSource = gitignoreFilterSource (_: _: true);
```
Those filter functions accept the same arguments the `builtins.filterSource` function would pass to its filters, thus `fn: gitignoreFilterSourcePure fn ""` should be extensionally equivalent to `filterSource`. The file is blacklisted if it's blacklisted by either your filter or the gitignoreFilter.
If you want to make your own filter from scratch, you may use
```nix
gitignoreFilter = ign: root: filterPattern (gitignoreToPatterns ign) root;
```
## gitignore files in subdirectories {#sec-pkgs-nix-gitignore-usage-recursive}
If you wish to use a filter that would search for .gitignore files in subdirectories, just like git does by default, use this function:
```nix
gitignoreFilterRecursiveSource = filter: patterns: root:
# OR
gitignoreRecursiveSource = gitignoreFilterSourcePure (_: _: true);
```

@ -1,70 +0,0 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xml:id="sec-pkgs-nix-gitignore">
<title>pkgs.nix-gitignore</title>
<para>
<function>pkgs.nix-gitignore</function> is a function that acts similarly to <literal>builtins.filterSource</literal> but also allows filtering with the help of the gitignore format.
</para>
<section xml:id="sec-pkgs-nix-gitignore-usage">
<title>Usage</title>
<para>
<literal>pkgs.nix-gitignore</literal> exports a number of functions, but you'll most likely need either <literal>gitignoreSource</literal> or <literal>gitignoreSourcePure</literal>. As their first argument, they both accept either 1. a file with gitignore lines or 2. a string with gitignore lines, or 3. a list of either of the two. They will be concatenated into a single big string.
</para>
<programlisting><![CDATA[
{ pkgs ? import <nixpkgs> {} }:
nix-gitignore.gitignoreSource [] ./source
# Simplest version
nix-gitignore.gitignoreSource "supplemental-ignores\n" ./source
# This one reads the ./source/.gitignore and concats the auxiliary ignores
nix-gitignore.gitignoreSourcePure "ignore-this\nignore-that\n" ./source
# Use this string as gitignore, don't read ./source/.gitignore.
nix-gitignore.gitignoreSourcePure ["ignore-this\nignore-that\n", ~/.gitignore] ./source
# It also accepts a list (of strings and paths) that will be concatenated
# once the paths are turned to strings via readFile.
]]></programlisting>
<para>
These functions are derived from the <literal>Filter</literal> functions by setting the first filter argument to <literal>(_: _: true)</literal>:
</para>
<programlisting><![CDATA[
gitignoreSourcePure = gitignoreFilterSourcePure (_: _: true);
gitignoreSource = gitignoreFilterSource (_: _: true);
]]></programlisting>
<para>
Those filter functions accept the same arguments the <literal>builtins.filterSource</literal> function would pass to its filters, thus <literal>fn: gitignoreFilterSourcePure fn ""</literal> should be extensionally equivalent to <literal>filterSource</literal>. The file is blacklisted iff it's blacklisted by either your filter or the gitignoreFilter.
</para>
<para>
If you want to make your own filter from scratch, you may use
</para>
<programlisting><![CDATA[
gitignoreFilter = ign: root: filterPattern (gitignoreToPatterns ign) root;
]]></programlisting>
</section>
<section xml:id="sec-pkgs-nix-gitignore-usage-recursive">
<title>gitignore files in subdirectories</title>
<para>
If you wish to use a filter that would search for .gitignore files in subdirectories, just like git does by default, use this function:
</para>
<programlisting><![CDATA[
gitignoreFilterRecursiveSource = filter: patterns: root:
# OR
gitignoreRecursiveSource = gitignoreFilterSourcePure (_: _: true);
]]></programlisting>
</section>
</section>

@ -0,0 +1,17 @@
# prefer-remote-fetch overlay {#sec-prefer-remote-fetch}
`prefer-remote-fetch` is an overlay that makes sources be downloaded on the remote builder. This is useful when the evaluating machine has a slow upload speed while the builder can fetch faster directly from the source. To use it, add the following snippet as a new overlay:
```nix
self: super:
(super.prefer-remote-fetch self super)
```
A full configuration example that sets up the overlay for your own user account could look like this:
```ShellSession
$ mkdir ~/.config/nixpkgs/overlays/
$ cat > ~/.config/nixpkgs/overlays/prefer-remote-fetch.nix <<EOF
self: super: super.prefer-remote-fetch self super
EOF
```

@ -1,21 +0,0 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/xinclude"
xml:id="sec-prefer-remote-fetch">
<title>prefer-remote-fetch overlay</title>
<para>
<function>prefer-remote-fetch</function> is an overlay that download sources on remote builder. This is useful when the evaluating machine has a slow upload while the builder can fetch faster directly from the source. To use it, put the following snippet as a new overlay:
<programlisting>
self: super:
(super.prefer-remote-fetch self super)
</programlisting>
A full configuration example for that sets the overlay up for your own account, could look like this
<screen>
<prompt>$ </prompt>mkdir ~/.config/nixpkgs/overlays/
<prompt>$ </prompt>cat &gt; ~/.config/nixpkgs/overlays/prefer-remote-fetch.nix &lt;&lt;EOF
self: super: super.prefer-remote-fetch self super
EOF
</screen>
</para>
</section>

@ -0,0 +1,10 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
xml:id="chap-hooks">
<title>Hooks reference</title>
<para>
Nixpkgs has several hook packages that augment the stdenv phases.
</para>
<xi:include href="./postgresql-test-hook.section.xml" />
</chapter>

@ -0,0 +1,59 @@
# `postgresqlTestHook` {#sec-postgresqlTestHook}
This hook starts a PostgreSQL server during the `checkPhase`. Example:
```nix
{ stdenv, postgresql, postgresqlTestHook }:
stdenv.mkDerivation {
# ...
checkInputs = [
postgresql
postgresqlTestHook
];
}
```
If you use a custom `checkPhase`, remember to add the `runHook` calls:
```nix
checkPhase = ''
runHook preCheck
# ... your tests
runHook postCheck
''
```
## Variables {#sec-postgresqlTestHook-variables}
The hook logic will read a number of variables and set them to a default value if unset or empty. A short usage sketch follows the lists below.
Exported variables:
- `PGDATA`: location of server files.
- `PGHOST`: location of UNIX domain socket directory; the default `host` in a connection string.
- `PGUSER`: user to create / log in with, default: `test_user`.
- `PGDATABASE`: database name, default: `test_db`.
Bash-only variables:
- `postgresqlTestUserOptions`: SQL options to use when creating the `$PGUSER` role, default: `LOGIN`.
- `postgresqlTestSetupSQL`: SQL commands to run as database administrator after startup, default: statements that create `$PGUSER` and `$PGDATABASE`.
- `postgresqlTestSetupCommands`: bash commands to run after database start, defaults to running `$postgresqlTestSetupSQL` as database administrator.
- `postgresqlEnableTCP`: set to `1` to enable TCP listening. Flaky; not recommended.
- `postgresqlStartCommands`: defaults to `pg_ctl start`.
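For illustration, a derivation might override some of these defaults. This is a rough sketch (the package name and chosen values are hypothetical), relying on the behaviour described above that unset or empty variables fall back to their defaults:

```nix
{ stdenv, postgresql, postgresqlTestHook }:

stdenv.mkDerivation {
  pname = "my-package";   # hypothetical package
  version = "1.0";
  src = ./.;

  checkInputs = [
    postgresql
    postgresqlTestHook
  ];
  doCheck = true;

  # Values the hook picks up instead of its defaults.
  PGUSER = "my_test_user";
  PGDATABASE = "my_test_db";

  # Grant the test role extra privileges when it is created.
  postgresqlTestUserOptions = "LOGIN SUPERUSER";
}
```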
## TCP and the Nix sandbox {#sec-postgresqlTestHook-tcp}
`postgresqlEnableTCP` relies on network sandboxing, which is not available on macOS and some custom Nix installations, resulting in flaky tests.
For this reason, it is disabled by default.
The preferred solution is to make the test suite use a UNIX domain socket connection. This is the default behavior when no `host` connection parameter is provided.
Some test suites hardcode a value for `host` though, so a patch may be required. If you can upstream the patch, you can make `host` default to the `PGHOST` environment variable when set. Otherwise, you can patch it locally to omit the `host` connection string parameter altogether.
::: {.note}
The error `libpq: failed (could not receive data from server: Connection refused` is generally an indication that the test suite is trying to connect through TCP.
:::

@ -1,6 +1,6 @@
# Agda {#agda}
## How to use Agda
## How to use Agda {#how-to-use-agda}
Agda is available as the [agda](https://search.nixos.org/packages?channel=unstable&show=agda&from=0&size=30&sort=relevance&query=agda)
package.
@ -43,6 +43,7 @@ agda.withPackages (p: [
```
You can also reference a GitHub repository
```nix
agda.withPackages (p: [
(p.standard-library.overrideAttrs (oldAttrs: {
@ -59,6 +60,7 @@ agda.withPackages (p: [
If you want to use a library not added to Nixpkgs, you can add a
dependency to a local library by calling `agdaPackages.mkDerivation`.
```nix
agda.withPackages (p: [
(p.mkDerivation {
@ -92,20 +94,21 @@ See [Building Agda Packages](#building-agda-packages) for more information on `m
Agda will not by default use these libraries. To tell Agda to use a library we have some options:
* Call `agda` with the library flag:
```ShellSession
$ agda -l standard-library -i . MyFile.agda
```
```ShellSession
$ agda -l standard-library -i . MyFile.agda
```
* Write a `my-library.agda-lib` file for the project you are working on which may look like:
```
name: my-library
include: .
depend: standard-library
```
```
name: my-library
include: .
depend: standard-library
```
* Create the file `~/.agda/defaults` and add any libraries you want to use by default.
More information can be found in the [official Agda documentation on library management](https://agda.readthedocs.io/en/v2.6.1/tools/package-system.html).
## Compiling Agda
## Compiling Agda {#compiling-agda}
Agda modules can be compiled using the GHC backend with the `--compile` flag. A version of `ghc` with `ieee754` is made available to the Agda program via the `--with-compiler` flag.
This can be overridden by a different version of `ghc` as follows:
@ -116,7 +119,8 @@ agda.withPackages {
}
```
## Writing Agda packages
## Writing Agda packages {#writing-agda-packages}
To write a nix derivation for an Agda library, first check that the library has a `*.agda-lib` file.
A derivation can then be written using `agdaPackages.mkDerivation`. This has similar arguments to `stdenv.mkDerivation` with the following additions:
@ -140,19 +144,37 @@ agdaPackages.mkDerivation {
}
```
### Building Agda packages
### Building Agda packages {#building-agda-packages}
The default build phase for `agdaPackages.mkDerivation` simply runs `agda` on the `Everything.agda` file.
If something else is needed to build the package (e.g. `make`) then the `buildPhase` should be overridden.
Additionally, a `preBuild` or `configurePhase` can be used if there are steps that need to be done prior to checking the `Everything.agda` file.
`agda` and the Agda libraries contained in `buildInputs` are made available during the build phase.
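For instance, a library that is built with a Makefile instead of an `Everything.agda` file might override the build phase roughly like this (a sketch; the library name and build command are illustrative):

```nix
agdaPackages.mkDerivation {
  pname = "my-agda-library";   # hypothetical library
  version = "1.0.0";
  src = ./.;

  # This project has no Everything.agda; it is built with make instead,
  # so the default buildPhase is replaced.
  buildPhase = ''
    runHook preBuild
    make
    runHook postBuild
  '';
}
```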
### Installing Agda packages
### Installing Agda packages {#installing-agda-packages}
The default install phase copies Agda source files, Agda interface files (`*.agdai`) and `*.agda-lib` files to the output directory.
This can be overridden.
By default, Agda sources are files ending in `.agda`, or literate Agda files ending in `.lagda`, `.lagda.tex`, `.lagda.org`, `.lagda.md`, `.lagda.rst`. The list of recognised Agda source extensions can be extended by setting the `extraExtensions` config variable.
## Adding Agda packages to Nixpkgs
## Maintaining the Agda package set on Nixpkgs {#maintaining-the-agda-package-set-on-nixpkgs}
We are aiming at providing all common Agda libraries as packages on `nixpkgs`,
and keeping them up to date.
Contributions and maintenance help is always appreciated,
but the maintenance effort is typically low since the Agda ecosystem is quite small.
The `nixpkgs` Agda package set tries to take up a role similar to that of [Stackage](https://www.stackage.org/) in the Haskell world.
It is a curated set of libraries that:
1. Always work together.
2. Are as up-to-date as possible.
While the Haskell ecosystem is huge, and Stackage is highly automatised,
the Agda package set is small and can (still) be maintained by hand.
### Adding Agda packages to Nixpkgs {#adding-agda-packages-to-nixpkgs}
To add an Agda package to `nixpkgs`, the derivation should be written to `pkgs/development/libraries/agda/${library-name}/` and an entry should be added to `pkgs/top-level/agda-packages.nix`. Here it is called in a scope with access to all other Agda libraries, so the top line of the `default.nix` can look like:
@ -182,6 +204,53 @@ mkDerivation {
'';
}
```
This library has a file called `.agda-lib`, and so we give an empty string to `libraryFile` as nothing precedes `.agda-lib` in the filename. This file contains `name: IAL-1.3`, and so we let `libraryName = "IAL-1.3"`. This library does not use an `Everything.agda` file and instead has a Makefile, so there is no need to set `everythingFile` and we set a custom `buildPhase`.
When writing an Agda package it is essential to make sure that no `.agda-lib` file gets added to the store as a single file (for example by using `writeText`). This causes Agda to think that the Nix store is an Agda library and it will attempt to write to it whenever it typechecks something. See [https://github.com/agda/agda/issues/4613](https://github.com/agda/agda/issues/4613).
In the pull request adding this library,
you can test whether it builds correctly by writing in a comment:
```
@ofborg build agdaPackages.iowa-stdlib
```
### Maintaining Agda packages
As mentioned before, the aim is to have a compatible, and up-to-date package set.
These two conditions sometimes exclude each other:
For example, if we update `agdaPackages.standard-library` because there was an upstream release,
this will typically break many reverse dependencies,
i.e. downstream Agda libraries that depend on the standard library.
In `nixpkgs` we are typically among the first to notice this,
since we have build tests in place to check this.
In a pull request updating e.g. the standard library, you should write the following comment:
```
@ofborg build agdaPackages.standard-library.passthru.tests
```
This will build all reverse dependencies of the standard library,
for example `agdaPackages.agda-categories`, or `agdaPackages.generic`.
In some cases it is useful to build _all_ Agda packages.
This can be done with the following Github comment:
```
@ofborg build agda.passthru.tests.allPackages
```
Sometimes, the builds of the reverse dependencies fail because they have not yet been updated and released.
You should drop the maintainers a quick issue notifying them of the breakage,
citing the build error (which you can get from the ofborg logs).
If you are motivated, you might even send a pull request that fixes it.
Usually, the maintainers will answer within a week or two with a new release.
Bumping the version of that reverse dependency should be a further commit on your PR.
In the rare case that a new release is not to be expected within an acceptable time,
simply mark the broken package as broken by setting `meta.broken = true;`.
This will exclude it from the build test.
It can be added later when it is fixed,
and does not hinder the advancement of the whole package set in the meantime.
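Marking a package as broken is just a matter of setting the corresponding `meta` attribute in its derivation; a minimal sketch with a hypothetical library name:

```nix
agdaPackages.mkDerivation {
  pname = "some-agda-library";   # hypothetical reverse dependency
  version = "1.0.0";
  src = ./.;

  # Not yet compatible with the new standard-library release;
  # excluded from the build tests until fixed upstream.
  meta.broken = true;
}
```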

@ -3,8 +3,8 @@
The Android build environment provides three major features and a number of
supporting features.
Deploying an Android SDK installation with plugins
--------------------------------------------------
## Deploying an Android SDK installation with plugins {#deploying-an-android-sdk-installation-with-plugins}
The first use case is deploying the SDK with a desired set of plugins or subsets
of an SDK.
@ -136,8 +136,8 @@ in
androidComposition.platform-tools
```
Using predefined Android package compositions
---------------------------------------------
## Using predefined Android package compositions {#using-predefined-android-package-compositions}
In addition to composing an Android package set manually, it is also possible
to use a predefined composition that contains all basic packages for a specific
Android version, such as version 9.0 (API-level 28).
@ -159,12 +159,13 @@ with import <nixpkgs> {};
androidenv.androidPkgs_9_0.platform-tools
```
Building an Android application
-------------------------------
## Building an Android application {#building-an-android-application}
In addition to the SDK, it is also possible to build an Ant-based Android
project and automatically deploy all the Android plugins that a project
requires.
```nix
with import <nixpkgs> {};
@ -199,8 +200,8 @@ to build Android apps. An Android APK gets exposed as a build product and can be
installed on any Android device with a web browser by navigating to the build
result page.
Spawning emulator instances
---------------------------
## Spawning emulator instances {#spawning-emulator-instances}
For testing purposes, it can also be quite convenient to automatically generate
scripts that spawn emulator instances with all desired configuration settings.
@ -241,8 +242,8 @@ androidenv.emulateApp {
In addition to prebuilt APKs, you can also bind the APK parameter to a
`buildApp {}` function invocation shown in the previous example.
Notes on environment variables in Android projects
--------------------------------------------------
## Notes on environment variables in Android projects {#notes-on-environment-variables-in-android-projects}
* `ANDROID_SDK_ROOT` should point to the Android SDK. In your Nix expressions, this should be
`${androidComposition.androidsdk}/libexec/android-sdk`. Note that `ANDROID_HOME` is deprecated,
but if you rely on tools that need it, you can export it too.
@ -300,8 +301,8 @@ This shell.nix includes a shell hook that overwrites local.properties with the c
sdk.dir and ndk.dir values. This will ensure that the SDK and NDK directories will
both be correct when you run Android Studio inside nix-shell.
Notes on improving build.gradle compatibility
---------------------------------------------
## Notes on improving build.gradle compatibility {#notes-on-improving-build.gradle-compatibility}
Ensure that your buildToolsVersion and ndkVersion match what is declared in androidenv.
If you are using cmake, make sure its declared version is correct too.
@ -321,8 +322,8 @@ android {
```
Querying the available versions of each plugin
----------------------------------------------
## Querying the available versions of each plugin {#querying-the-available-versions-of-each-plugin}
repo.json provides all the options in one file now.
A shell script in the `pkgs/development/mobile/androidenv/` subdirectory can be used to retrieve all
@ -334,8 +335,8 @@ possible options:
The above command-line instruction queries all package versions in repo.json.
Updating the generated expressions
----------------------------------
## Updating the generated expressions {#updating-the-generated-expressions}
repo.json is generated from XML files that the Android Studio package manager uses.
To update the expressions run the `generate.sh` script that is stored in the
`pkgs/development/mobile/androidenv/` subdirectory:

@ -4,6 +4,12 @@
In this document and related Nix expressions, we use the term, _BEAM_, to describe the environment. BEAM is the name of the Erlang Virtual Machine and, as far as we're concerned, from a packaging perspective, all languages that run on the BEAM are interchangeable. That which varies, like the build system, is transparent to users of any given BEAM package, so we make no distinction.
## Available versions and deprecations schedule {#available-versions-and-deprecations-schedule}
### Elixir {#elixir}
nixpkgs follows the [official elixir deprecation schedule](https://hexdocs.pm/elixir/compatibility-and-deprecations.html) and keeps the last 5 released versions of Elixir available.
## Structure {#beam-structure}
All BEAM-related expressions are available via the top-level `beam` attribute, which includes:
@ -62,74 +68,128 @@ Erlang.mk functions similarly to Rebar3, except we use `buildErlangMk` instead o
`mixRelease` is used to make a release in the mix sense. Dependencies will need to be fetched with `fetchMixDeps` and passed to it.
#### mixRelease - Elixir Phoenix example
#### mixRelease - Elixir Phoenix example {#mix-release-elixir-phoenix-example}
There are three steps: frontend dependencies (JavaScript), backend dependencies (Elixir), and the final derivation that puts both of those together.
##### mixRelease - Frontend dependencies (javascript) {#mix-release-javascript-deps}
For phoenix projects, inside of nixpkgs you can either use yarn2nix (mkYarnModule) or node2nix. An example with yarn2nix can be found [here](https://github.com/NixOS/nixpkgs/blob/master/pkgs/servers/web-apps/plausible/default.nix#L39). An example with node2nix will follow. To package something outside of nixpkgs, you have alternatives like [npmlock2nix](https://github.com/nix-community/npmlock2nix) or [nix-npm-buildpackage](https://github.com/serokell/nix-npm-buildpackage)
##### mixRelease - backend dependencies (mix) {#mix-release-mix-deps}
There are 2 ways to package backend dependencies. With mix2nix and with a fixed-output-derivation (FOD).
###### mix2nix {#mix2nix}
Here is how your `default.nix` file would look.
`mix2nix` is a CLI tool available in nixpkgs. It will generate a Nix expression from a `mix.lock` file. It is quite standard in the 2nix tool series.
Note that currently mix2nix can't handle git dependencies inside the mix.lock file. If you have git dependencies, you can either add them manually (see [example](https://github.com/NixOS/nixpkgs/blob/master/pkgs/servers/pleroma/default.nix#L20)) or use the FOD method.
The advantage of using mix2nix is that nix will know your whole dependency graph. On a dependency update, this won't trigger a full rebuild and download of all the dependencies, where FOD will do so.
Practical steps:
- run `mix2nix > mix_deps.nix` in the upstream repo.
- pass `mixNixDeps = with pkgs; import ./mix_deps.nix { inherit lib beamPackages; };` as an argument to mixRelease.
If there are git dependencies:
- You'll need to fix the version artificially in mix.exs and regenerate the mix.lock with fixed version (on upstream). This will enable you to run `mix2nix > mix_deps.nix`.
- From the mix_deps.nix file, remove the dependencies that had git versions and pass them as an override to the import function.
```nix
with import <nixpkgs> { };
mixNixDeps = import ./mix.nix {
inherit beamPackages lib;
overrides = (final: prev: {
# mix2nix does not support git dependencies yet,
# so we need to add them manually
prometheus_ex = beamPackages.buildMix rec {
name = "prometheus_ex";
version = "3.0.5";
# Change the argument src with the git src that you actually need
src = fetchFromGitLab {
domain = "git.pleroma.social";
group = "pleroma";
owner = "elixir-libraries";
repo = "prometheus.ex";
rev = "a4e9beb3c1c479d14b352fd9d6dd7b1f6d7deee5";
sha256 = "1v0q4bi7sb253i8q016l7gwlv5562wk5zy3l2sa446csvsacnpjk";
};
# you can re-use the same beamDeps argument as generated
beamDeps = with final; [ prometheus ];
};
});
};
```
let
packages = beam.packagesWith beam.interpreters.erlang;
src = builtins.fetchgit {
url = "ssh://git@github.com/your_id/your_repo";
rev = "replace_with_your_commit";
};
You will need to run the build process once to fix the sha256 to correspond to your new git src.
pname = "your_project";
version = "0.0.1";
mixEnv = "prod";
###### FOD {#fixed-output-derivation}
mixDeps = packages.fetchMixDeps {
A fixed output derivation will download mix dependencies from the internet. To ensure reproducibility, a hash will be supplied. Note that mix is relatively reproducible. An FOD generating a different hash on each run hasn't been observed (as opposed to npm where the chances are relatively high). See [elixir_ls](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/beam-modules/elixir_ls.nix) for a usage example of FOD.
Practical steps
- start with the following argument to mixRelease
```nix
mixFodDeps = fetchMixDeps {
pname = "mix-deps-${pname}";
inherit src mixEnv version;
# nix will complain and tell you the right value to replace this with
inherit src version;
sha256 = lib.fakeSha256;
# if you have build time environment variables add them here
MY_ENV_VAR="my_value";
};
```
nodeDependencies = (pkgs.callPackage ./assets/default.nix { }).shell.nodeDependencies;
The first build will complain about the sha256 value, you can replace with the suggested value after that.
frontEndFiles = stdenvNoCC.mkDerivation {
pname = "frontend-${pname}";
Note that if after you've replaced the value, nix suggests another sha256, then mix is not fetching the dependencies reproducibly. An FOD will not work in that case and you will have to use mix2nix.
nativeBuildInputs = [ nodejs ];
##### mixRelease - example {#mix-release-example}
inherit version src;
Here is how your `default.nix` file would look for a phoenix project.
buildPhase = ''
cp -r ./assets $TEMPDIR
```nix
with import <nixpkgs> { };
mkdir -p $TEMPDIR/assets/node_modules/.cache
cp -r ${nodeDependencies}/lib/node_modules $TEMPDIR/assets
export PATH="${nodeDependencies}/bin:$PATH"
let
# beam.interpreters.erlangR23 is available if you need a particular version
packages = beam.packagesWith beam.interpreters.erlang;
cd $TEMPDIR/assets
webpack --config ./webpack.config.js
cd ..
'';
pname = "your_project";
version = "0.0.1";
installPhase = ''
cp -r ./priv/static $out/
'';
src = builtins.fetchgit {
url = "ssh://git@github.com/your_id/your_repo";
rev = "replace_with_your_commit";
};
outputHashAlgo = "sha256";
outputHashMode = "recursive";
# if using mix2nix you can use the mixNixDeps attribute
mixFodDeps = packages.fetchMixDeps {
pname = "mix-deps-${pname}";
inherit src version;
# nix will complain and tell you the right value to replace this with
outputHash = lib.fakeSha256;
impureEnvVars = lib.fetchers.proxyImpureEnvVars;
sha256 = lib.fakeSha256;
# if you have build time environment variables add them here
MY_ENV_VAR="my_value";
};
nodeDependencies = (pkgs.callPackage ./assets/default.nix { }).shell.nodeDependencies;
in packages.mixRelease {
inherit src pname version mixEnv mixDeps;
inherit src pname version mixFodDeps;
# if you have build time environment variables add them here
MY_ENV_VAR="my_value";
preInstall = ''
mkdir -p ./priv/static
cp -r ${frontEndFiles} ./priv/static
postBuild = ''
ln -sf ${nodeDependencies}/lib/node_modules assets/node_modules
npm run deploy --prefix ./assets
# for external task you need a workaround for the no deps check flag
# https://github.com/phoenixframework/phoenix/issues/2690
mix do deps.loadpaths --no-deps-check, phx.digest
mix phx.digest --no-deps-check
'';
}
```
@ -142,7 +202,7 @@ Setup will require the following steps:
- you can now `nix-build .`
- To run the release, set the `RELEASE_TMP` environment variable to a directory that your program has write access to. It will be used to store the BEAM settings.
#### Example of creating a service for an Elixir - Phoenix project
#### Example of creating a service for an Elixir - Phoenix project {#example-of-creating-a-service-for-an-elixir---phoenix-project}
In order to create a service with your release, you could add a `service.nix`
in your project with the following
@ -159,6 +219,8 @@ in
systemd.services.${release_name} = {
wantedBy = [ "multi-user.target" ];
after = [ "network.target" "postgresql.service" ];
# note that if you are connecting to a postgres instance on a different host
# postgresql.service should not be included in the requires.
requires = [ "network-online.target" "postgresql.service" ];
description = "my app";
environment = {
@ -195,6 +257,7 @@ in
path = [ pkgs.bash ];
};
# in case you have migration scripts or you want to use a remote shell
environment.systemPackages = [ release ];
}
```
@ -206,23 +269,42 @@ in
Usually, we need to create a `shell.nix` file and do our development inside of the environment specified therein. Just install your version of Erlang and any other interpreters, and then use your normal build tools. As an example with Elixir:
```nix
{ pkgs ? import "<nixpkgs"> {} }:
{ pkgs ? import <nixpkgs> {} }:
with pkgs;
let
elixir = beam.packages.erlangR22.elixir_1_9;
elixir = beam.packages.erlangR24.elixir_1_12;
in
mkShell {
buildInputs = [ elixir ];
}
```
### Using an overlay
ERL_INCLUDE_PATH="${erlang}/lib/erlang/usr/include";
If you need to use an overlay to change some attributes of a derivation, e.g. if you need a bugfix from a version that is not yet available in nixpkgs, you can override attributes such as `version` (and the corresponding `sha256`) and then use this overlay in your development environment:
#### `shell.nix`
```nix
let
elixir_1_13_1_overlay = (self: super: {
elixir_1_13 = super.elixir_1_13.override {
version = "1.13.1";
sha256 = "0z0b1w2vvw4vsnb99779c2jgn9bgslg7b1pmd9vlbv02nza9qj5p";
};
});
pkgs = import <nixpkgs> { overlays = [ elixir_1_13_1_overlay ]; };
in
with pkgs;
mkShell {
buildInputs = [
elixir_1_13
];
}
```
#### Elixir - Phoenix project
#### Elixir - Phoenix project {#elixir---phoenix-project}
Here is an example `shell.nix`.
@ -233,10 +315,10 @@ let
# define packages to install
basePackages = [
git
# replace with beam.packages.erlang.elixir_1_11 if you need
# replace with beam.packages.erlang.elixir_1_13 if you need
beam.packages.erlang.elixir
nodejs
postgresql_13
postgresql_14
# only used for frontend dependencies
# you are free to use yarn2nix as well
nodePackages.node2nix
@ -254,10 +336,12 @@ let
mkdir -p .nix-mix .nix-hex
export MIX_HOME=$PWD/.nix-mix
export HEX_HOME=$PWD/.nix-mix
# make hex from Nixpkgs available
# `mix local.hex` will install hex into MIX_HOME and should take precedence
export MIX_PATH="${beam.packages.erlang.hex}/lib/erlang/lib/hex/ebin"
export PATH=$MIX_HOME/bin:$HEX_HOME/bin:$PATH
# TODO: not sure how to make hex available without installing it afterwards.
mix local.hex --if-missing
export LANG=en_US.UTF-8
export LANG=C.UTF-8
# keep your shell history in iex
export ERL_AFLAGS="-kernel shell_history enabled"
# postges related

@ -149,7 +149,7 @@ A few notes about [Full example — `default.nix`](#ex-buildBowerComponentsDefau
## Troubleshooting {#ssec-bower2nix-troubleshooting}
### ENOCACHE errors from buildBowerComponents
### ENOCACHE errors from buildBowerComponents {#enocache-errors-from-buildbowercomponents}
This means that Bower was looking for a package version which doesn't exist in the generated `bower-packages.nix`.

@ -0,0 +1,49 @@
# CHICKEN {#sec-chicken}
[CHICKEN](https://call-cc.org/) is a
[R⁵RS](https://schemers.org/Documents/Standards/R5RS/HTML/)-compliant Scheme
compiler. It includes an interactive mode and a custom package format, "eggs".
## Using Eggs
Eggs described in nixpkgs are available inside the
`chickenPackages.chickenEggs` attrset. Including an egg as a build input is
done in the typical Nix fashion. For example, to include support for [SRFI
189](https://srfi.schemers.org/srfi-189/srfi-189.html) in a derivation, one
might write:
```nix
buildInputs = [
chicken
chickenPackages.chickenEggs.srfi-189
];
```
Both `chicken` and its eggs have a setup hook which configures the environment
variables `CHICKEN_INCLUDE_PATH` and `CHICKEN_REPOSITORY_PATH`.
## Updating Eggs
nixpkgs only knows about a subset of all published eggs. It uses
[egg2nix](https://github.com/the-kenny/egg2nix) to generate a
package set from a list of eggs to include.
The package set is regenerated by running the following shell commands:
```
$ nix-shell -p chickenPackages.egg2nix
$ cd pkgs/development/compilers/chicken/5/
$ egg2nix eggs.scm > eggs.nix
```
## Adding Eggs
When we run `egg2nix`, we obtain one collection of eggs with
mutually-compatible versions. This means that when we add new eggs, we may
need to update existing eggs. To keep those separate, follow the procedure for
updating eggs before including more eggs.
To include more eggs, edit `pkgs/development/compilers/chicken/5/eggs.scm`.
The first section of this file lists eggs which are required by `egg2nix`
itself; all other eggs go into the second section. After editing, follow the
procedure for updating eggs.

@ -1,6 +1,6 @@
# Coq and coq packages {#sec-language-coq}
## Coq derivation: `coq`
## Coq derivation: `coq` {#coq-derivation-coq}
The Coq derivation is overridable through the `coq.override overrides`, where overrides is an attribute set which contains the arguments to override. We recommend overriding either of the following
@ -8,7 +8,7 @@ The Coq derivation is overridable through the `coq.override overrides`, where ov
* `customOCamlPackage` (optional, defaults to `null`, which lets Coq choose a version automatically), which can be set to any of the ocaml packages attribute of `ocaml-ng` (such as `ocaml-ng.ocamlPackages_4_10` which is the default for Coq 8.11 for example).
* `coq-version` (optional, defaults to the short version e.g. "8.10"), is a version number of the form "x.y" that indicates which Coq's version build behavior to mimic when using a source which is not a release. E.g. `coq.override { version = "d370a9d1328a4e1cdb9d02ee032f605a9d94ec7a"; coq-version = "8.10"; }`.
## Coq packages attribute sets: `coqPackages`
## Coq packages attribute sets: `coqPackages` {#coq-packages-attribute-sets-coqpackages}
The recommended way of defining a derivation for a Coq library, is to use the `coqPackages.mkCoqDerivation` function, which is essentially a specialization of `mkDerivation` taking into account most of the specifics of Coq libraries. The following attributes are supported:
@ -28,11 +28,13 @@ The recommended way of defining a derivation for a Coq library, is to use the `c
* `domain` (optional, defaults to `"github.com"`), domains including the strings `"github"` or `"gitlab"` in their names are automatically supported, otherwise, one must change the `fetcher` argument to support them (cf `pkgs/development/coq-modules/heq/default.nix` for an example),
* `releaseRev` (optional, defaults to `(v: v)`), provides a default mapping from release names to revision hashes/branch names/tags,
* `displayVersion` (optional), provides a way to alter the computation of `name` from `pname`, by explaining how to display version numbers,
* `namePrefix` (optional), provides a way to alter the computation of `name` from `pname`, by explaining which dependencies must occur in `name`,
* `extraBuildInputs` (optional), by default `buildInputs` just contains `coq`, this allows to add more build inputs,
* `namePrefix` (optional, defaults to `[ "coq" ]`), provides a way to alter the computation of `name` from `pname`, by explaining which dependencies must occur in `name`,
* `extraNativeBuildInputs` (optional), by default `nativeBuildInputs` just contains `coq`; this allows adding more native build inputs (`nativeBuildInputs` are executables, while `buildInputs` are libraries and dependencies),
* `extraBuildInputs` (optional), this allows adding more build inputs,
* `mlPlugin` (optional, defaults to `false`). Some extensions (plugins) might require OCaml and sometimes other OCaml packages. Standard dependencies can be added by setting the current option to `true`. For a finer grain control, the `coq.ocamlPackages` attribute can be used in `extraBuildInputs` to depend on the same package set Coq was built against.
* `useDune2ifVersion` (optional, defaults to `(x: false)`) uses Dune2 to build the package if the provided predicate evaluates to true on the version, e.g. `useDune2ifVersion = versions.isGe "1.1"` will use Dune if the version of the package is greater than or equal to `"1.1"`,
* `useDune2` (optional, defaults to `false`) uses Dune2 to build the package if set to true, the presence of this attribute overrides the behavior of the previous one.
* `opam-name` (optional, defaults to concatenating with a dash separator the components of `namePrefix` and `pname`), name of the Dune package to build.
* `enableParallelBuilding` (optional, defaults to `true`), since it is activated by default, we provide a way to disable it.
* `extraInstallFlags` (optional), allows to extend `installFlags` which initializes the variable `COQMF_COQLIB` so as to install in the proper subdirectory. Indeed Coq libraries should be installed in `$(out)/lib/coq/${coq.coq-version}/user-contrib/`. Such directories are automatically added to the `$COQPATH` environment variable by the hook defined in the Coq derivation.
* `setCOQBIN` (optional, defaults to `true`), by default, the environment variable `$COQBIN` is set to the current Coq's binary, but one can disable this behavior by setting it to `false`,

@ -1,20 +1,22 @@
# Crystal {#crystal}
## Building a Crystal package
## Building a Crystal package {#building-a-crystal-package}
This section uses [Mint](https://github.com/mint-lang/mint) as an example for how to build a Crystal package.
If the Crystal project has any dependencies, the first step is to get a `shards.nix` file encoding those. Get a copy of the project and go to its root directory such that its `shard.lock` file is in the current directory, then run `crystal2nix` in it
If the Crystal project has any dependencies, the first step is to get a `shards.nix` file encoding those. Get a copy of the project and go to its root directory such that its `shard.lock` file is in the current directory. Executable projects should usually commit the `shard.lock` file, but sometimes that's not the case, which means you need to generate it yourself. With an existing `shard.lock` file, `crystal2nix` can be run.
```bash
$ git clone https://github.com/mint-lang/mint
$ cd mint
$ git checkout 0.5.0
$ if [ ! -f shard.lock ]; then nix-shell -p shards --run "shards lock"; fi
$ nix-shell -p crystal2nix --run crystal2nix
```
This should have generated a `shards.nix` file.
Next create a Nix file for your derivation and use `pkgs.crystal.buildCrystalPackage` as follows:
```nix
with import <nixpkgs> {};
crystal.buildCrystalPackage rec {

@ -0,0 +1,34 @@
# CUDA {#cuda}
CUDA-only packages are stored in the `cudaPackages` packages set. This set
includes the `cudatoolkit`, portions of the toolkit in separate derivations,
`cudnn`, `cutensor` and `nccl`.
A package set is available for each CUDA version, so for example
`cudaPackages_11_6`. Within each set is a matching version of the above listed
packages. Additionally, other versions of the packages that are packaged and
compatible are available as well. For example, there can be a
`cudaPackages.cudnn_8_3_2` package.
To use one or more CUDA packages in an expression, give the expression a `cudaPackages` parameter and, in case CUDA support is optional, a `cudaSupport` flag:
```nix
cudaSupport ? false
cudaPackages ? {}
```
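A package expression that takes these parameters could look roughly like the following sketch (the package name, source, and chosen CUDA dependencies are hypothetical):

```nix
{ lib
, stdenv
, cudaSupport ? false
, cudaPackages ? { }
}:

stdenv.mkDerivation {
  pname = "mypkg";   # hypothetical package
  version = "1.0";
  src = ./.;

  # Only pull in CUDA libraries when support is requested.
  buildInputs = lib.optionals cudaSupport [
    cudaPackages.cudatoolkit
    cudaPackages.cudnn
  ];
}
```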
When using `callPackage`, you can choose to pass in a different variant, e.g.
when a different version of the toolkit suffices
```nix
mypkg = callPackage { cudaPackages = cudaPackages_11_5; }
```
If another version of, say, `cudnn` or `cutensor` is needed, you can override the
package set to make it the default. This guarantees you get a consistent package
set.
```nix
mypkg = let
  cudaPackages = cudaPackages_11_5.overrideScope' (final: prev: {
    cudnn = prev.cudnn_8_3_2;
  });
in callPackage { inherit cudaPackages; };
```

@ -50,7 +50,7 @@ expression does not protect the Prelude import with a semantic integrity
check, so the first step is to freeze the expression using `dhall freeze`,
like this:
```bash
```ShellSession
$ dhall freeze --inplace ./true.dhall
```
@ -113,7 +113,7 @@ in
… which we can then build using this command:
```bash
```ShellSession
$ nix build --file ./example.nix dhallPackages.true
```
@ -121,7 +121,7 @@ $ nix build --file ./example.nix dhallPackages.true
The above package produces the following directory tree:
```bash
```ShellSession
$ tree -a ./result
result
├── .cache
@ -135,7 +135,7 @@ result
* `source.dhall` contains the result of interpreting our Dhall package:
```bash
```ShellSession
$ cat ./result/source.dhall
True
```
@ -143,7 +143,7 @@ result
* The `.cache` subdirectory contains one binary cache product encoding the
same result as `source.dhall`:
```bash
```ShellSession
$ dhall decode < ./result/.cache/dhall/122027abdeddfe8503496adeb623466caa47da5f63abd2bc6fa19f6cfcb73ecfed70
True
```
@ -151,7 +151,7 @@ result
* `binary.dhall` contains a Dhall expression which handles fetching and decoding
the same cache product:
```bash
```ShellSession
$ cat ./result/binary.dhall
missing sha256:27abdeddfe8503496adeb623466caa47da5f63abd2bc6fa19f6cfcb73ecfed70
$ cp -r ./result/.cache .cache
@ -168,7 +168,7 @@ to conserve disk space when they are used exclusively as dependencies. For
example, if we build the Prelude package it will only contain the binary
encoding of the expression:
```bash
```ShellSession
$ nix build --file ./example.nix dhallPackages.Prelude
$ tree -a result
@ -199,7 +199,7 @@ Dhall overlay like this:
… and now the Prelude will contain the fully decoded result of interpreting
the Prelude:
```bash
```ShellSession
$ nix build --file ./example.nix dhallPackages.Prelude
$ tree -a result
@ -302,7 +302,7 @@ Additionally, `buildDhallGitHubPackage` accepts the same arguments as
You can use the `dhall-to-nixpkgs` command-line utility to automate
packaging Dhall code. For example:
```bash
```ShellSession
$ nix-env --install --attr haskellPackages.dhall-nixpkgs
$ nix-env --install --attr nix-prefetch-git # Used by dhall-to-nixpkgs
@ -329,12 +329,12 @@ The utility takes care of automatically detecting remote imports and converting
them to package dependencies. You can also use the utility on local
Dhall directories, too:
```bash
```ShellSession
$ dhall-to-nixpkgs directory ~/proj/dhall-semver
{ buildDhallDirectoryPackage, Prelude }:
buildDhallDirectoryPackage {
name = "proj";
src = /Users/gabriel/proj/dhall-semver;
src = ~/proj/dhall-semver;
file = "package.dhall";
source = false;
document = false;
@ -342,6 +342,37 @@ $ dhall-to-nixpkgs directory ~/proj/dhall-semver
}
```
### Remote imports as fixed-output derivations {#ssec-dhall-remote-imports-as-fod}
`dhall-to-nixpkgs` has the ability to fetch and build remote imports as
fixed-output derivations by using their Dhall integrity check. This is
sometimes easier than manually packaging all remote imports.
This can be used like the following:
```ShellSession
$ dhall-to-nixpkgs directory --fixed-output-derivations ~/proj/dhall-semver
{ buildDhallDirectoryPackage, buildDhallUrl }:
buildDhallDirectoryPackage {
name = "proj";
src = ~/proj/dhall-semver;
file = "package.dhall";
source = false;
document = false;
dependencies = [
(buildDhallUrl {
url = "https://prelude.dhall-lang.org/v17.0.0/package.dhall";
hash = "sha256-ENs8kZwl6QRoM9+Jeo/+JwHcOQ+giT2VjDQwUkvlpD4=";
dhallHash = "sha256:10db3c919c25e9046833df897a8ffe2701dc390fa0893d958c3430524be5a43e";
})
];
}
```
Here, `dhall-semver`'s `Prelude` dependency is fetched and built with the
`buildDhallUrl` helper function, instead of being passed in as a function
argument.
## Overriding dependency versions {#ssec-dhall-overriding-dependency-versions}
Suppose that we change our `true.dhall` example expression to depend on an older
@ -359,7 +390,7 @@ in Prelude.Bool.not False
If we try to rebuild that expression the build will fail:
```
```ShellSession
$ nix build --file ./example.nix dhallPackages.true
builder for '/nix/store/0f1hla7ff1wiaqyk1r2ky4wnhnw114fi-true.drv' failed with exit code 1; last 10 log lines:
@ -385,7 +416,7 @@ importing the URL.
However, we can override the default Prelude version by using `dhall-to-nixpkgs`
to create a Dhall package for our desired Prelude:
```bash
```ShellSession
$ dhall-to-nixpkgs github https://github.com/dhall-lang/dhall-lang.git \
--name Prelude \
--directory Prelude \
@ -396,7 +427,7 @@ $ dhall-to-nixpkgs github https://github.com/dhall-lang/dhall-lang.git \
… and then referencing that package in our Dhall overlay, by either overriding
the Prelude globally for all packages, like this:
```bash
```nix
dhallOverrides = self: super: {
true = self.callPackage ./true.nix { };
@ -407,7 +438,7 @@ the Prelude globally for all packages, like this:
… or selectively overriding the Prelude dependency for just the `true` package,
like this:
```bash
```nix
dhallOverrides = self: super: {
true = self.callPackage ./true.nix {
Prelude = self.callPackage ./Prelude.nix { };

@ -1,6 +1,6 @@
# Dotnet
# Dotnet {#dotnet}
## Local Development Workflow
## Local Development Workflow {#local-development-workflow}
For local development, it's recommended to use nix-shell to create a dotnet environment:
@ -16,7 +16,7 @@ mkShell {
}
```
### Using many sdks in a workflow
### Using many sdks in a workflow {#using-many-sdks-in-a-workflow}
It's very likely that more than one sdk will be needed on a given project. Dotnet provides several different frameworks (e.g. dotnetcore, aspnetcore, etc.) as well as many versions for a given framework. Normally, dotnet is able to fetch a framework and install it relative to the executable. However, this would mean writing to the nix store in nixpkgs, which is read-only. To support the many-sdk use case, one can compose an environment using `dotnetCorePackages.combinePackages`:
@ -28,8 +28,7 @@ mkShell {
  packages = [
    (with dotnetCorePackages; combinePackages [
      sdk_3_1
      sdk_3_0
      sdk_2_1
      sdk_5_0
    ])
  ];
}
@ -37,7 +36,7 @@ mkShell {
This will produce a dotnet installation that has the dotnet 3.1 and 5.0 sdk. The first sdk listed will have its cli utility present in the resulting environment. Example info output:
```ShellSesssion
```ShellSession
$ dotnet --info
.NET Core SDK (reflecting any global.json):
Version: 3.1.101
@ -60,16 +59,74 @@ $ dotnet --info
Microsoft.NETCore.App 3.1.1 [/nix/store/iiv98i2jdi226dgh4jzkkj2ww7f8jgpd-dotnet-core-combined/shared/Microsoft.NETCore.App]
```
## dotnet-sdk vs dotnetCorePackages.sdk
## dotnet-sdk vs dotnetCorePackages.sdk {#dotnet-sdk-vs-dotnetcorepackages.sdk}
The `dotnetCorePackages.sdk_X_Y` is preferred over the old dotnet-sdk as both the major and minor versions are very important for a dotnet environment. If a given minor version isn't present (or was changed), then this will likely break your ability to build a project.
## dotnetCorePackages.sdk vs dotnetCorePackages.net vs dotnetCorePackages.netcore vs dotnetCorePackages.aspnetcore
## dotnetCorePackages.sdk vs dotnetCorePackages.runtime vs dotnetCorePackages.aspnetcore {#dotnetcorepackages.sdk-vs-dotnetcorepackages.runtime-vs-dotnetcorepackages.aspnetcore}
The `dotnetCorePackages.sdk` contains both a runtime and the full sdk of a given version. The `runtime` and `aspnetcore` packages are meant to serve as minimal runtimes to deploy alongside already built applications.
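For instance, an environment meant only for running an already built application could pull in just a runtime package instead of a full SDK. The snippet below is a minimal sketch; the exact attribute and version (`aspnetcore_3_1` here) are illustrative and depend on what your application targets.

```nix
with import <nixpkgs> {};

mkShell {
  packages = [
    dotnetCorePackages.aspnetcore_3_1 # runtime only, no SDK tooling
  ];
}
```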
## Packaging a Dotnet Application {#packaging-a-dotnet-application}
To package Dotnet applications, you can use `buildDotnetModule`. This has similar arguments to `stdenv.mkDerivation`, with the following additions:
* `projectFile` has to be used for specifying the dotnet project file relative to the source root. These usually have `.sln` or `.csproj` file extensions. This can be an array of multiple projects as well.
* `nugetDeps` has to be used to specify the NuGet dependency file. Unfortunately, these cannot be deterministically fetched without a lockfile. A script to fetch these is available as `passthru.fetch-deps`. This file can also be generated manually using the `nuget-to-nix` tool, which is available in nixpkgs.
* `packNupkg` is used to pack the project as a `nupkg` and install it to `$out/share`. If set to `true`, the derivation can be used as a dependency for another dotnet project by adding it to `projectReferences`.
* `projectReferences` can be used to resolve `ProjectReference` project items. Referenced projects can be packed with `buildDotnetModule` by setting the `packNupkg = true` attribute and passing a list of derivations to `projectReferences`. Since we are sharing referenced projects as NuGets they must be added to csproj/fsproj files as `PackageReference` as well.
For example, suppose your project has a local dependency:
```xml
<ProjectReference Include="../foo/bar.fsproj" />
```
To enable discovery through `projectReferences` you would need to add:
```xml
<ProjectReference Include="../foo/bar.fsproj" />
<PackageReference Include="bar" Version="*" Condition=" '$(ContinuousIntegrationBuild)'=='true' "/>
```
* `executables` is used to specify which executables get wrapped to `$out/bin`, relative to `$out/lib/$pname`. If this is unset, all executables generated will get installed. If you do not want to install any, set this to `[]`. This gets done in the `preFixup` phase.
* `runtimeDeps` is used to wrap libraries into `LD_LIBRARY_PATH`. This is how dotnet usually handles runtime dependencies.
* `buildType` is used to change the type of build. Possible values are `Release`, `Debug`, etc. By default, this is set to `Release`.
* `dotnet-sdk` is useful in cases where you need to change what dotnet SDK is being used.
* `dotnet-runtime` is useful in cases where you need to change what dotnet runtime is being used. This can be either a regular dotnet runtime or an aspnetcore runtime.
* `dotnet-test-sdk` is useful in cases where unit tests expect a different dotnet SDK. By default, this is set to the `dotnet-sdk` attribute.
* `testProjectFile` is useful in cases where the regular project file does not contain the unit tests. It gets restored and built, but not installed. You may need to regenerate your nuget lockfile after setting this.
* `disabledTests` is used to disable running specific unit tests. This gets passed as: `dotnet test --filter "FullyQualifiedName!={}"`, to ensure compatibility with all unit test frameworks.
* `dotnetRestoreFlags` can be used to pass flags to `dotnet restore`.
* `dotnetBuildFlags` can be used to pass flags to `dotnet build`.
* `dotnetTestFlags` can be used to pass flags to `dotnet test`. Used only if `doCheck` is set to `true`.
* `dotnetInstallFlags` can be used to pass flags to `dotnet install`.
* `dotnetPackFlags` can be used to pass flags to `dotnet pack`. Used only if `packNupkg` is set to `true`.
* `dotnetFlags` can be used to pass flags to all of the above phases.
When packaging a new application, you need to fetch its dependencies. You can set `nugetDeps` to an empty string to make the derivation temporarily evaluate, and then run `nix-build -A package.passthru.fetch-deps` to generate its dependency fetching script. After running the script, you should have the location of the generated lockfile printed to the console. This can be copied to a stable directory. Note that if either `projectFile` or `nugetDeps` are unset, this script cannot be generated!
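As a sketch of that bootstrap step, a first iteration of the derivation might look like the following; the package name and project file are hypothetical, and `nugetDeps` starts out empty so that `passthru.fetch-deps` becomes available:

```nix
{ buildDotnetModule }:

buildDotnetModule {
  pname = "my-app";                  # hypothetical package name
  version = "0.1";
  src = ./.;
  projectFile = "src/my-app.csproj"; # hypothetical project file
  nugetDeps = "";                    # temporary; replace with ./deps.nix once generated
}
```

Once `nix-build -A package.passthru.fetch-deps` has produced the lockfile, point `nugetDeps` at it, as in the full example below.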
Here is an example `default.nix`, using some of the previously discussed arguments:
```nix
{ lib, buildDotnetModule, dotnetCorePackages, ffmpeg }:

let
  referencedProject = import ../../bar { ... };
in buildDotnetModule rec {
  pname = "someDotnetApplication";
  version = "0.1";

  src = ./.;

  projectFile = "src/project.sln";
  nugetDeps = ./deps.nix; # File generated with `nix-build -A package.passthru.fetch-deps`.

  projectReferences = [ referencedProject ]; # `referencedProject` must contain `nupkg` in the folder structure.

  dotnet-sdk = dotnetCorePackages.sdk_3_1;
  dotnet-runtime = dotnetCorePackages.net_5_0;
  dotnetFlags = [ "--runtime linux-x64" ];

  executables = [ "foo" ]; # This wraps "$out/lib/$pname/foo" to `$out/bin/foo`.
  executables = []; # Don't install any executables.

  packNupkg = true; # This packs the project as "foo-0.1.nupkg" at `$out/share`.

  runtimeDeps = [ ffmpeg ]; # This will wrap ffmpeg's library path into `LD_LIBRARY_PATH`.
}
```
The `dotnetCorePackages.sdk` contains both a runtime and the full sdk of a given version. The `net`, `netcore` and `aspnetcore` packages are meant to serve as minimal runtimes to deploy alongside already built applications. For runtime versions >= .NET 5 `net` is used while `netcore` is used for older .NET Core runtime version.
## Packaging a Dotnet Application
Ideally, we would like to build against the sdk, then only have the dotnet runtime available in the runtime closure.
TODO: Create closure-friendly way to package dotnet applications

@ -15,28 +15,26 @@ Modes of use of `emscripten`:
If you want to work with `emcc`, `emconfigure` and `emmake` as you are used to from Ubuntu and similar distributions you can use these commands:
* `nix-env -i emscripten`
* `nix-env -f "<nixpkgs>" -iA emscripten`
* `nix-shell -p emscripten`
* **Declarative usage**:
This mode is far more powerful since it makes use of `nix` for dependency management of emscripten libraries and targets by using the `mkDerivation` which is implemented by `pkgs.emscriptenStdenv` and `pkgs.buildEmscriptenPackage`. The source for the packages is in `pkgs/top-level/emscripten-packages.nix` and the abstraction behind it in `pkgs/development/em-modules/generic/default.nix`.
This mode is far more powerful since it makes use of `nix` for dependency management of emscripten libraries and targets by using the `mkDerivation` which is implemented by `pkgs.emscriptenStdenv` and `pkgs.buildEmscriptenPackage`. The source for the packages is in `pkgs/top-level/emscripten-packages.nix` and the abstraction behind it in `pkgs/development/em-modules/generic/default.nix`. From the root of the nixpkgs repository:
* build and install all packages:
* `nix-env -iA emscriptenPackages`
* dev-shell for zlib implementation hacking:
* `nix-shell -A emscriptenPackages.zlib`
## Imperative usage
## Imperative usage {#imperative-usage}
A few things to note:
* `export EMCC_DEBUG=2` is nice for debugging
* `~/.emscripten`, the build artifact cache, sometimes creates issues and needs to be removed from time to time
## Declarative usage
## Declarative usage {#declarative-usage}
Let's see two different examples from `pkgs/top-level/emscripten-packages.nix`:
@ -50,7 +48,7 @@ A special requirement of the `pkgs.buildEmscriptenPackage` is the `doCheck = tru
* Use `export EMCC_DEBUG=2` from within an emscriptenPackage's `phase` to get more detailed debug output about what is going wrong.
* The `~/.emscripten` cache requires us to set `HOME=$TMPDIR` in individual phases. This makes compilation slower but also makes it more deterministic.
### Usage 1: pkgs.zlib.override
### Usage 1: pkgs.zlib.override {#usage-1-pkgs.zlib.override}
This example uses `zlib` from nixpkgs but instead of compiling **C** to **ELF** it compiles **C** to **JS** since we were using `pkgs.zlib.override` and changed stdenv to `pkgs.emscriptenStdenv`. A few adaptations and hacks were put in place to make it work. One advantage is that when `pkgs.zlib` is updated, it will automatically update this package as well. However, this can also be a downside...
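The full expression lives in `pkgs/top-level/emscripten-packages.nix`; the core of the idea is roughly the following. This is a condensed sketch, not the real override, and most of the actual adaptations are omitted:

```nix
zlib = (pkgs.zlib.override { stdenv = pkgs.emscriptenStdenv; }).overrideAttrs (old: {
  # The usual autotools flow is redirected through emconfigure/emmake,
  # and HOME is pointed at a writable location for the emscripten cache.
  configurePhase = ''
    HOME=$TMPDIR
    emconfigure ./configure --prefix=$out
  '';
  buildPhase = ''
    emmake make
  '';
});
```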
@ -110,7 +108,7 @@ See the `zlib` example:
'';
});
### Usage 2: pkgs.buildEmscriptenPackage
### Usage 2: pkgs.buildEmscriptenPackage {#usage-2-pkgs.buildemscriptenpackage}
This `xmlmirror` example features an emscriptenPackage which is defined completely within this context; no `pkgs.zlib.override` is used.
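In outline, such a package looks roughly like this. It is a sketch only: the real `xmlmirror` expression in `pkgs/top-level/emscripten-packages.nix` is considerably longer, and the attribute values shown here (source, inputs, phases) are illustrative:

```nix
xmlmirror = pkgs.buildEmscriptenPackage rec {
  name = "xmlmirror";
  src = ./.; # in the real package, fetched from the upstream repository

  buildInputs = [ pkg-config zlib ];

  configurePhase = ''
    HOME=$TMPDIR
  '';
  buildPhase = ''
    emmake make
  '';
  # buildEmscriptenPackage expects tests to be enabled (doCheck = true),
  # see the special requirement mentioned above.
  checkPhase = ''
    echo "run the project's test suite here"
  '';
  installPhase = ''
    mkdir -p $out
    cp *.js *.wasm $out/
  '';
};
```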
@ -165,7 +163,7 @@ This `xmlmirror` example features a emscriptenPackage which is defined completel
'';
};
### Declarative debugging
### Declarative debugging {#declarative-debugging}
Use `nix-shell -I nixpkgs=/some/dir/nixpkgs -A emscriptenPackages.libz` and from there you can go through the individual steps. This makes it easy to build a good `unit test` or list the files of the project.
@ -177,7 +175,7 @@ Use `nix-shell -I nixpkgs=/some/dir/nixpkgs -A emscriptenPackages.libz` and from
6. `buildPhase`
7. ... happy hacking...
## Summary
## Summary {#summary}
Using this toolchain makes it easy to leverage `nix` from NixOS, macOS or even Windows (WSL+ubuntu+nix). This toolchain is reproducible, behaves like the rest of the packages from nixpkgs, and contains a set of well-working examples to learn and adapt from.

@ -42,7 +42,21 @@ Unlike other libraries mentioned in this section, GdkPixbuf only supports a sing
### Icons {#ssec-gnome-icons}
When an application uses icons, an icon theme should be available in `XDG_DATA_DIRS` during runtime. The package for the default, icon-less [hicolor-icon-theme](https://www.freedesktop.org/wiki/Software/icon-theme/) (should be propagated by every icon theme) contains [a setup hook](#ssec-gnome-hooks-hicolor-icon-theme) that will pick up icon themes from `buildInputs` and pass it to our wrapper. Unfortunately, relying on that would mean every user has to download the theme included in the package expression no matter their preference. For that reason, we leave the installation of icon theme on the user. If you use one of the desktop environments, you probably already have an icon theme installed.
When an application uses icons, an icon theme should be available in `XDG_DATA_DIRS` during runtime. The package for the default, icon-less [hicolor-icon-theme](https://www.freedesktop.org/wiki/Software/icon-theme/) (should be propagated by every icon theme) contains [a setup hook](#ssec-gnome-hooks-hicolor-icon-theme) that will pick up icon themes from `buildInputs` and add their datadirs to the `XDG_ICON_DIRS` environment variable (this is Nixpkgs specific, not an XDG standard variable). Unfortunately, relying on that would mean every user has to download the theme included in the package expression no matter their preference. For that reason, we leave the installation of the icon theme to the user. If you use one of the desktop environments, you probably already have an icon theme installed.
In the rare case you need to use icons from dependencies (e.g. when an app forces an icon theme), you can use the following to pick them up:
```nix
buildInputs = [
  pantheon.elementary-icon-theme
];
preFixup = ''
  gappsWrapperArgs+=(
    # The icon theme is hardcoded.
    --prefix XDG_DATA_DIRS : "$XDG_ICON_DIRS"
  )
'';
```
To avoid costly file system access when locating icons, GTK, [as well as Qt](https://woboq.com/blog/qicon-reads-gtk-icon-cache-in-qt57.html), can rely on `icon-theme.cache` files from the themes' top-level directories. These files are generated using `gtk-update-icon-cache`, which is expected to be run whenever an icon is added to or removed from an icon theme (typically an application icon into the `hicolor` theme), and some programs do indeed run this after icon installation. However, since packages are installed into their own prefix by Nix, this would lead to conflicts. For that reason, `gtk3` provides a [setup hook](#ssec-gnome-hooks-gtk-drop-icon-theme-cache) that will clean the file from installation. Since most applications only ship their own icon that will be loaded on start-up, it should not affect them too much. On the other hand, icon themes are much larger and more widely used, so we need to cache them. Because we recommend installing icon themes globally, we will generate the cache files from all packages in a profile using a NixOS module. You can enable the cache generation using the `gtk.iconCache.enable` option if your desktop environment does not already do that.
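On NixOS, enabling that cache generation is a single option; shown here as a minimal configuration sketch:

```nix
{
  gtk.iconCache.enable = true;
}
```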
@ -92,17 +106,17 @@ For convenience, it also adds `dconf.lib` for a GIO module implementing a GSetti
- []{#ssec-gnome-hooks-glib} `glib` setup hook will populate `GSETTINGS_SCHEMAS_PATH` and then `wrapGAppsHook` will prepend it to `XDG_DATA_DIRS`.
- []{#ssec-gnome-hooks-gdk-pixbuf} `gdk-pixbuf` setup hook will populate `GDK_PIXBUF_MODULE_FILE` with the path to biggest `loaders.cache` file from the dependencies containing [GdkPixbuf loaders](ssec-gnome-gdk-pixbuf-loaders). This works fine when there are only two packages containing loaders (`gdk-pixbuf` and e.g. `librsvg`) – it will choose the second one, reasonably expecting that it will be bigger since it describes extra loader in addition to the default ones. But when there are more than two loader packages, this logic will break. One possible solution would be constructing a custom cache file for each package containing a program like `services/x11/gdk-pixbuf.nix` NixOS module does. `wrapGAppsHook` copies the `GDK_PIXBUF_MODULE_FILE` environment variable into the produced wrapper.
- []{#ssec-gnome-hooks-gdk-pixbuf} `gdk-pixbuf` setup hook will populate `GDK_PIXBUF_MODULE_FILE` with the path to the biggest `loaders.cache` file from the dependencies containing [GdkPixbuf loaders](#ssec-gnome-gdk-pixbuf-loaders). This works fine when there are only two packages containing loaders (`gdk-pixbuf` and e.g. `librsvg`) – it will choose the second one, reasonably expecting that it will be bigger since it describes extra loaders in addition to the default ones. But when there are more than two loader packages, this logic will break. One possible solution would be constructing a custom cache file for each package containing a program, like the `services/x11/gdk-pixbuf.nix` NixOS module does. `wrapGAppsHook` copies the `GDK_PIXBUF_MODULE_FILE` environment variable into the produced wrapper.
- []{#ssec-gnome-hooks-gtk-drop-icon-theme-cache} One of `gtk3`’s setup hooks will remove `icon-theme.cache` files from package’s icon theme directories to avoid conflicts. Icon theme packages should prevent this with `dontDropIconThemeCache = true;`.
- []{#ssec-gnome-hooks-dconf} `dconf.lib` is a dependency of `wrapGAppsHook`, which then also adds it to the `GIO_EXTRA_MODULES` variable.
- []{#ssec-gnome-hooks-hicolor-icon-theme} `hicolor-icon-theme`’s setup hook will add icon themes to `XDG_ICON_DIRS` which is prepended to `XDG_DATA_DIRS` by `wrapGAppsHook`.
- []{#ssec-gnome-hooks-hicolor-icon-theme} `hicolor-icon-theme`’s setup hook will add icon themes to `XDG_ICON_DIRS`.
- []{#ssec-gnome-hooks-gobject-introspection} `gobject-introspection` setup hook populates `GI_TYPELIB_PATH` variable with `lib/girepository-1.0` directories of dependencies, which is then added to wrapper by `wrapGAppsHook`. It also adds `share` directories of dependencies to `XDG_DATA_DIRS`, which is intended to promote GIR files but it also [pollutes the closures](https://github.com/NixOS/nixpkgs/issues/32790) of packages using `wrapGAppsHook`.
::: warning
::: {.warning}
The setup hook [currently](https://github.com/NixOS/nixpkgs/issues/56943) does not work in expressions with `strictDeps` enabled, like Python packages. In those cases, you will need to disable it with `strictDeps = false;`.
:::
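For example, a hypothetical Python application using `wrapGAppsHook` would carry the workaround like this; a sketch only, with a made-up package name and minimal inputs:

```nix
python3Packages.buildPythonApplication {
  pname = "some-gnome-app"; # hypothetical
  version = "1.0";
  src = ./.;

  nativeBuildInputs = [ wrapGAppsHook gobject-introspection ];

  # Needed until the gobject-introspection setup hook works with strictDeps:
  strictDeps = false;
}
```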

@ -12,7 +12,7 @@ The function `buildGoModule` builds Go programs managed with Go modules. It buil
Below is an example expression using `buildGoModule`. The following arguments are of special significance to the function:
- `vendorSha256`: is the hash of the output of the intermediate fetcher derivation. `vendorSha256` can also take `null` as an input. When `null` is used as a value, rather than fetching the dependencies and vendoring them, we use the vendoring included within the source repo. If you'd like to not have to update this field on dependency changes, run `go mod vendor` in your source repo and set `vendorSha256 = null;`
- `runVend`: runs the vend command to generate the vendor directory. This is useful if your code depends on c code and go mod tidy does not include the needed sources to build.
- `proxyVendor`: Fetches (go mod download) and proxies the vendor directory. This is useful if your code depends on c code and go mod tidy does not include the needed sources to build or if any dependency has case-insensitive conflicts which will produce platform-dependent `vendorSha256` checksums.
```nix
pet = buildGoModule rec {
@ -28,14 +28,11 @@ pet = buildGoModule rec {
vendorSha256 = "1879j77k96684wi554rkjxydrj8g3hpp0kvxz03sd8dmwr3lh83j";
runVend = true;
meta = with lib; {
description = "Simple command-line snippet manager, written in Go";
homepage = "https://github.com/knqyf263/pet";
license = licenses.mit;
maintainers = with maintainers; [ kalbasit ];
platforms = platforms.linux ++ platforms.darwin;
};
}
```
@ -44,7 +41,7 @@ pet = buildGoModule rec {
The function `buildGoPackage` builds legacy Go programs, not supporting Go modules.
### Example for `buildGoPackage`
### Example for `buildGoPackage` {#example-for-buildgopackage}
Below is an example expression using buildGoPackage. The following arguments are of special significance to the function:
@ -112,23 +109,31 @@ done
Both `buildGoModule` and `buildGoPackage` can be tweaked to behave slightly differently, if the following attributes are used:
### `buildFlagsArray` and `buildFlags`: {#ex-goBuildFlags-noarray}
### `ldflags` {#var-go-ldflags}
These attributes set build flags supported by `go build`. We recommend using `buildFlagsArray`. The most common use case of these attributes is to make the resulting executable aware of its own version. For example:
Arguments to pass to the Go linker tool via the `-ldflags` argument of `go build`. The most common use case for this argument is to make the resulting executable aware of its own version. For example:
```nix
buildFlagsArray = [
  # Note: single quotes are not needed.
  "-ldflags=-X main.Version=${version} -X main.Commit=${version}"
];
ldflags = [
  "-s" "-w"
  "-X main.Version=${version}"
  "-X main.Commit=${version}"
];
```
### `tags` {#var-go-tags}
Arguments to pass to Go via the `-tags` argument of `go build`. For example:
```nix
buildFlagsArray = ''
  -ldflags=
  -X main.Version=${version}
  -X main.Commit=${version}
'';
tags = [
  "production"
  "sqlite"
];
```
```nix
tags = [ "production" ] ++ lib.optionals withSqlite [ "sqlite" ];
```
### `deleteVendor` {#var-go-deleteVendor}
@ -137,4 +142,8 @@ Removes the pre-existing vendor directory. This should only be used if the depen
### `subPackages` {#var-go-subPackages}
Limits the builder from building child packages that have not been listed. If <varname>subPackages</varname> is not specified, all child packages will be built.
Specified as a string or list of strings. Limits the builder from building child packages that have not been listed. If `subPackages` is not specified, all child packages will be built.
### `excludedPackages` {#var-go-excludedPackages}
Specified as a string or list of strings. Causes the builder to skip building child packages that match any of the provided values. If `excludedPackages` is not specified, all child packages will be built.
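For example (the package paths here are hypothetical):

```nix
subPackages = [ "cmd/server" ];
excludedPackages = [ "examples" ];
```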

@ -0,0 +1,31 @@
# Hy {#sec-language-hy}
## Installation {#ssec-hy-installation}
### Installation without packages {#installation-without-packages}
You can install `hy` via nix-env or by adding it to `configuration.nix`, referring to it as the `hy` attribute. This kind of installation adds `hy` to your environment and it successfully works with `python3`.
::: {.caution}
Packages that are installed with your python derivation are not accessible by `hy` this way.
:::
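For example, a minimal sketch of the `configuration.nix` route looks like this:

```nix
{ pkgs, ... }:
{
  environment.systemPackages = with pkgs; [ hy ];
}
```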
### Installation with packages {#installation-with-packages}
Creating a `hy` derivation with custom `python` packages is really simple and similar to the way that python does it. The `hy` attribute provides a `withPackages` function that creates a custom `hy` derivation with the specified packages.
For example, if you want to create a shell with `matplotlib` and `numpy`, you can do it like so:
```ShellSession
$ nix-shell -p "hy.withPackages (ps: with ps; [ numpy matplotlib ])"
```
Or if you want to extend your `configuration.nix`:
```nix
{ # ...
  environment.systemPackages = with pkgs; [
    (hy.withPackages (py-packages: with py-packages; [ numpy matplotlib ]))
  ];
}
```
