Merge commit '34ad3ffe08adfca17fcb4e4a47bb5f3b113687be'

main
Katharina Fey 3 years ago
commit def41fa5ac
Signed by: kookie
GPG Key ID: 90734A9E619C8A6C
  1. 22
      infra/libkookie/nixpkgs/unstable/.github/CODEOWNERS
  2. 6
      infra/libkookie/nixpkgs/unstable/.github/labeler.yml
  3. 2
      infra/libkookie/nixpkgs/unstable/.github/workflows/basic-eval.yml
  4. 2
      infra/libkookie/nixpkgs/unstable/.github/workflows/editorconfig.yml
  5. 2
      infra/libkookie/nixpkgs/unstable/.github/workflows/manual-nixos.yml
  6. 2
      infra/libkookie/nixpkgs/unstable/.github/workflows/manual-nixpkgs.yml
  7. 2
      infra/libkookie/nixpkgs/unstable/.github/workflows/nixos-manual.yml
  8. 6
      infra/libkookie/nixpkgs/unstable/.github/workflows/periodic-merge-24h.yml
  9. 6
      infra/libkookie/nixpkgs/unstable/.github/workflows/periodic-merge-6h.yml
  10. 1
      infra/libkookie/nixpkgs/unstable/.gitignore
  11. 9
      infra/libkookie/nixpkgs/unstable/doc/Makefile
  12. 23
      infra/libkookie/nixpkgs/unstable/doc/build-aux/pandoc-filters/docbook-reader/citerefentry-to-rst-role.lua
  13. 10
      infra/libkookie/nixpkgs/unstable/doc/build-aux/pandoc-filters/docbook-writer/labelless-link-is-xref.lua
  14. 36
      infra/libkookie/nixpkgs/unstable/doc/build-aux/pandoc-filters/docbook-writer/rst-roles.lua
  15. 18
      infra/libkookie/nixpkgs/unstable/doc/build-aux/pandoc-filters/link-unix-man-references.lua
  16. 29
      infra/libkookie/nixpkgs/unstable/doc/build-aux/pandoc-filters/myst-reader/roles.lua
  17. 25
      infra/libkookie/nixpkgs/unstable/doc/build-aux/pandoc-filters/myst-writer/roles.lua
  18. 14
      infra/libkookie/nixpkgs/unstable/doc/builders/fetchers.chapter.md
  19. 18
      infra/libkookie/nixpkgs/unstable/doc/builders/packages/etc-files.section.md
  20. 1
      infra/libkookie/nixpkgs/unstable/doc/builders/packages/index.xml
  21. 6
      infra/libkookie/nixpkgs/unstable/doc/builders/packages/linux.section.md
  22. 1
      infra/libkookie/nixpkgs/unstable/doc/builders/special.xml
  23. 31
      infra/libkookie/nixpkgs/unstable/doc/builders/special/invalidateFetcherByDrvHash.section.md
  24. 15
      infra/libkookie/nixpkgs/unstable/doc/contributing/coding-conventions.chapter.md
  25. 7
      infra/libkookie/nixpkgs/unstable/doc/contributing/contributing-to-documentation.chapter.md
  26. 64
      infra/libkookie/nixpkgs/unstable/doc/languages-frameworks/agda.section.md
  27. 151
      infra/libkookie/nixpkgs/unstable/doc/languages-frameworks/beam.section.md
  28. 4
      infra/libkookie/nixpkgs/unstable/doc/languages-frameworks/coq.section.md
  29. 45
      infra/libkookie/nixpkgs/unstable/doc/languages-frameworks/dotnet.section.md
  30. 3
      infra/libkookie/nixpkgs/unstable/doc/languages-frameworks/index.xml
  31. 9
      infra/libkookie/nixpkgs/unstable/doc/languages-frameworks/java.section.md
  32. 2
      infra/libkookie/nixpkgs/unstable/doc/languages-frameworks/javascript.section.md
  33. 91
      infra/libkookie/nixpkgs/unstable/doc/languages-frameworks/nim.section.md
  34. 100
      infra/libkookie/nixpkgs/unstable/doc/languages-frameworks/octave.section.md
  35. 2
      infra/libkookie/nixpkgs/unstable/doc/languages-frameworks/python.section.md
  36. 20
      infra/libkookie/nixpkgs/unstable/doc/languages-frameworks/r.section.md
  37. 50
      infra/libkookie/nixpkgs/unstable/doc/languages-frameworks/rust.section.md
  38. 2
      infra/libkookie/nixpkgs/unstable/doc/stdenv/multiple-output.chapter.md
  39. 4
      infra/libkookie/nixpkgs/unstable/doc/stdenv/stdenv.chapter.md
  40. 2
      infra/libkookie/nixpkgs/unstable/doc/using/overlays.chapter.md
  41. 10
      infra/libkookie/nixpkgs/unstable/flake.nix
  42. 3
      infra/libkookie/nixpkgs/unstable/lib/customisation.nix
  43. 6
      infra/libkookie/nixpkgs/unstable/lib/default.nix
  44. 31
      infra/libkookie/nixpkgs/unstable/lib/generators.nix
  45. 10
      infra/libkookie/nixpkgs/unstable/lib/licenses.nix
  46. 25
      infra/libkookie/nixpkgs/unstable/lib/modules.nix
  47. 28
      infra/libkookie/nixpkgs/unstable/lib/options.nix
  48. 4
      infra/libkookie/nixpkgs/unstable/lib/sources.nix
  49. 13
      infra/libkookie/nixpkgs/unstable/lib/strings.nix
  50. 2
      infra/libkookie/nixpkgs/unstable/lib/systems/default.nix
  51. 5
      infra/libkookie/nixpkgs/unstable/lib/systems/doubles.nix
  52. 4
      infra/libkookie/nixpkgs/unstable/lib/systems/examples.nix
  53. 1
      infra/libkookie/nixpkgs/unstable/lib/systems/parse.nix
  54. 25
      infra/libkookie/nixpkgs/unstable/lib/systems/supported.nix
  55. 4
      infra/libkookie/nixpkgs/unstable/lib/tests/maintainers.nix
  56. 24
      infra/libkookie/nixpkgs/unstable/lib/tests/misc.nix
  57. 4
      infra/libkookie/nixpkgs/unstable/lib/tests/modules.sh
  58. 12
      infra/libkookie/nixpkgs/unstable/lib/tests/modules/types-anything/functions.nix
  59. 2
      infra/libkookie/nixpkgs/unstable/lib/tests/systems.nix
  60. 21
      infra/libkookie/nixpkgs/unstable/lib/trivial.nix
  61. 6
      infra/libkookie/nixpkgs/unstable/lib/types.nix
  62. 582
      infra/libkookie/nixpkgs/unstable/maintainers/maintainer-list.nix
  63. 88
      infra/libkookie/nixpkgs/unstable/maintainers/scripts/db-to-md.sh
  64. 97
      infra/libkookie/nixpkgs/unstable/maintainers/scripts/doc/escape-code-markup.py
  65. 32
      infra/libkookie/nixpkgs/unstable/maintainers/scripts/doc/replace-xrefs-by-empty-links.py
  66. 12
      infra/libkookie/nixpkgs/unstable/maintainers/scripts/doc/unknown-code-language.lua
  67. 10
      infra/libkookie/nixpkgs/unstable/maintainers/scripts/haskell/dependencies.nix
  68. 148
      infra/libkookie/nixpkgs/unstable/maintainers/scripts/haskell/hydra-report.hs
  69. 7
      infra/libkookie/nixpkgs/unstable/maintainers/scripts/haskell/maintainer-handles.nix
  70. 122
      infra/libkookie/nixpkgs/unstable/maintainers/scripts/haskell/merge-and-open-pr.sh
  71. 1
      infra/libkookie/nixpkgs/unstable/maintainers/scripts/haskell/upload-nixos-package-list-to-hackage.sh
  72. 175
      infra/libkookie/nixpkgs/unstable/maintainers/scripts/luarocks-packages.csv
  73. 2
      infra/libkookie/nixpkgs/unstable/maintainers/scripts/pluginupdate.py
  74. 165
      infra/libkookie/nixpkgs/unstable/maintainers/scripts/rebuild-amount.sh
  75. 62
      infra/libkookie/nixpkgs/unstable/maintainers/scripts/update-luarocks-packages
  76. 7
      infra/libkookie/nixpkgs/unstable/maintainers/scripts/update-luarocks-shell.nix
  77. 4
      infra/libkookie/nixpkgs/unstable/maintainers/scripts/update.nix
  78. 29
      infra/libkookie/nixpkgs/unstable/maintainers/team-list.nix
  79. 2
      infra/libkookie/nixpkgs/unstable/nixos/doc/manual/administration/boot-problems.section.md
  80. 62
      infra/libkookie/nixpkgs/unstable/nixos/doc/manual/administration/cleaning-store.chapter.md
  81. 63
      infra/libkookie/nixpkgs/unstable/nixos/doc/manual/administration/cleaning-store.xml
  82. 44
      infra/libkookie/nixpkgs/unstable/nixos/doc/manual/administration/container-networking.section.md
  83. 59
      infra/libkookie/nixpkgs/unstable/nixos/doc/manual/administration/container-networking.xml
  84. 28
      infra/libkookie/nixpkgs/unstable/nixos/doc/manual/administration/containers.chapter.md
  85. 34
      infra/libkookie/nixpkgs/unstable/nixos/doc/manual/administration/containers.xml
  86. 59
      infra/libkookie/nixpkgs/unstable/nixos/doc/manual/administration/control-groups.chapter.md
  87. 65
      infra/libkookie/nixpkgs/unstable/nixos/doc/manual/administration/control-groups.xml
  88. 48
      infra/libkookie/nixpkgs/unstable/nixos/doc/manual/administration/declarative-containers.section.md
  89. 60
      infra/libkookie/nixpkgs/unstable/nixos/doc/manual/administration/declarative-containers.xml
  90. 115
      infra/libkookie/nixpkgs/unstable/nixos/doc/manual/administration/imperative-containers.section.md
  91. 123
      infra/libkookie/nixpkgs/unstable/nixos/doc/manual/administration/imperative-containers.xml
  92. 38
      infra/libkookie/nixpkgs/unstable/nixos/doc/manual/administration/logging.chapter.md
  93. 43
      infra/libkookie/nixpkgs/unstable/nixos/doc/manual/administration/logging.xml
  94. 11
      infra/libkookie/nixpkgs/unstable/nixos/doc/manual/administration/maintenance-mode.section.md
  95. 16
      infra/libkookie/nixpkgs/unstable/nixos/doc/manual/administration/maintenance-mode.xml
  96. 21
      infra/libkookie/nixpkgs/unstable/nixos/doc/manual/administration/network-problems.section.md
  97. 27
      infra/libkookie/nixpkgs/unstable/nixos/doc/manual/administration/network-problems.xml
  98. 30
      infra/libkookie/nixpkgs/unstable/nixos/doc/manual/administration/rebooting.chapter.md
  99. 35
      infra/libkookie/nixpkgs/unstable/nixos/doc/manual/administration/rebooting.xml
  100. 38
      infra/libkookie/nixpkgs/unstable/nixos/doc/manual/administration/rollback.section.md
  101. Some files were not shown because too many files have changed in this diff

@@ -19,7 +19,7 @@
# Libraries
/lib @edolstra @nbp @infinisil
/lib/systems @nbp @ericson2314 @matthewbauer
/lib/systems @alyssais @nbp @ericson2314 @matthewbauer
/lib/generators.nix @edolstra @nbp @Profpatsch
/lib/cli.nix @edolstra @nbp @Profpatsch
/lib/debug.nix @edolstra @nbp @Profpatsch
@@ -42,6 +42,12 @@
# Nixpkgs build-support
/pkgs/build-support/writers @lassulus @Profpatsch
# Nixpkgs documentation
/maintainers/scripts/db-to-md.sh @jtojnar @ryantm
/maintainers/scripts/doc @jtojnar @ryantm
/doc/build-aux/pandoc-filters @jtojnar
/doc/contributing/contributing-to-documentation.chapter.md @jtojnar
# NixOS Internals
/nixos/default.nix @nbp @infinisil
/nixos/lib/from-env.nix @nbp @infinisil
@@ -59,6 +65,7 @@
/nixos/doc/manual/development/writing-modules.xml @nbp
/nixos/doc/manual/man-nixos-option.xml @nbp
/nixos/modules/installer/tools/nixos-option.sh @nbp
/nixos/modules/system @dasJ
# NixOS integration test driver
/nixos/lib/test-driver @tfc
@@ -90,9 +97,9 @@
/pkgs/top-level/haskell-packages.nix @cdepillabout @sternenseemann @maralorn @expipiplus1
# Perl
/pkgs/development/interpreters/perl @volth @stigtsp
/pkgs/top-level/perl-packages.nix @volth @stigtsp
/pkgs/development/perl-modules @volth @stigtsp
/pkgs/development/interpreters/perl @volth @stigtsp @zakame
/pkgs/top-level/perl-packages.nix @volth @stigtsp @zakame
/pkgs/development/perl-modules @volth @stigtsp @zakame
# R
/pkgs/applications/science/math/R @jbedo @bcdarwin
@@ -104,7 +111,7 @@
# Rust
/pkgs/development/compilers/rust @Mic92 @LnL7 @zowoq
/pkgs/build-support/rust @andir @danieldk @zowoq
/pkgs/build-support/rust @andir @zowoq
# Darwin-related
/pkgs/stdenv/darwin @NixOS/darwin-maintainers
@@ -225,3 +232,8 @@
# Cinnamon
/pkgs/desktops/cinnamon @mkg20001
#nim
/pkgs/development/compilers/nim @ehmry
/pkgs/development/nim-packages @ehmry
/pkgs/top-level/nim-packages.nix @ehmry

@@ -72,6 +72,12 @@
- nixos/**/*
- pkgs/os-specific/linux/nixos-rebuild/**/*
"6.topic: nim":
- doc/languages-frameworks/nim.section.md
- pkgs/development/compilers/nim/*
- pkgs/development/nim-packages/**/*
- pkgs/top-level/nim-packages.nix
"6.topic: ocaml":
- doc/languages-frameworks/ocaml.section.md
- pkgs/development/compilers/ocaml/**/*

@@ -15,6 +15,6 @@ jobs:
# we don't limit this action to only NixOS repo since the checks are cheap and useful developer feedback
steps:
- uses: actions/checkout@v2
- uses: cachix/install-nix-action@v13
- uses: cachix/install-nix-action@v14
# explicit list of supportedSystems is needed until aarch64-darwin becomes part of the trunk jobset
- run: nix-build pkgs/top-level/release.nix -A tarball.nixpkgs-basic-release-checks --arg supportedSystems '[ "aarch64-darwin" "aarch64-linux" "x86_64-linux" "x86_64-darwin" ]'

@@ -28,7 +28,7 @@ jobs:
# pull_request_target checks out the base branch by default
ref: refs/pull/${{ github.event.pull_request.number }}/merge
if: env.PR_DIFF
- uses: cachix/install-nix-action@v13
- uses: cachix/install-nix-action@v14
if: env.PR_DIFF
with:
# nixpkgs commit is pinned so that it doesn't break

@@ -18,7 +18,7 @@ jobs:
with:
# pull_request_target checks out the base branch by default
ref: refs/pull/${{ github.event.pull_request.number }}/merge
- uses: cachix/install-nix-action@v13
- uses: cachix/install-nix-action@v14
with:
# explicitly enable sandbox
extra_nix_config: sandbox = true

@@ -18,7 +18,7 @@ jobs:
with:
# pull_request_target checks out the base branch by default
ref: refs/pull/${{ github.event.pull_request.number }}/merge
- uses: cachix/install-nix-action@v13
- uses: cachix/install-nix-action@v14
with:
# explicitly enable sandbox
extra_nix_config: sandbox = true

@@ -19,7 +19,7 @@ jobs:
with:
# pull_request_target checks out the base branch by default
ref: refs/pull/${{ github.event.pull_request.number }}/merge
- uses: cachix/install-nix-action@v12
- uses: cachix/install-nix-action@v14
- name: Check DocBook files generated from Markdown are consistent
run: |
nixos/doc/manual/md-to-db.sh

@@ -28,12 +28,16 @@ jobs:
pairs:
- from: master
into: haskell-updates
- from: release-21.05
into: staging-next-21.05
- from: staging-next-21.05
into: staging-21.05
name: ${{ matrix.pairs.from }} → ${{ matrix.pairs.into }}
steps:
- uses: actions/checkout@v2
- name: ${{ matrix.pairs.from }} → ${{ matrix.pairs.into }}
uses: devmasx/merge-branch@v1.3.1
uses: devmasx/merge-branch@1.4.0
with:
type: now
from_branch: ${{ matrix.pairs.from }}

@@ -30,16 +30,12 @@ jobs:
into: staging-next
- from: staging-next
into: staging
- from: release-21.05
into: staging-next-21.05
- from: staging-next-21.05
into: staging-21.05
name: ${{ matrix.pairs.from }} → ${{ matrix.pairs.into }}
steps:
- uses: actions/checkout@v2
- name: ${{ matrix.pairs.from }} → ${{ matrix.pairs.into }}
uses: devmasx/merge-branch@v1.3.1
uses: devmasx/merge-branch@1.4.0
with:
type: now
from_branch: ${{ matrix.pairs.from }}

@@ -2,6 +2,7 @@
,*
.*.swp
.*.swo
.idea/
result
result-*
/doc/NEWS.html

@@ -3,12 +3,17 @@ MD_TARGETS=$(addsuffix .xml, $(basename $(shell find . -type f -regex '.*\.md$$'
PANDOC ?= pandoc
pandoc_media_dir = media
# NOTE: Keep in sync with NixOS manual (/nixos/doc/manual/md-to-db.sh).
# NOTE: Keep in sync with NixOS manual (/nixos/doc/manual/md-to-db.sh) and conversion script (/maintainers/scripts/db-to-md.sh).
# TODO: Remove raw-attribute when we can get rid of DocBook altogether.
pandoc_commonmark_enabled_extensions = +attributes+fenced_divs+footnotes+bracketed_spans+definition_lists+pipe_tables+raw_attribute
# Not needed:
# - docbook-reader/citerefentry-to-rst-role.lua (only relevant for DocBook → MarkDown/rST/MyST)
pandoc_flags = --extract-media=$(pandoc_media_dir) \
--lua-filter=$(PANDOC_LUA_FILTERS_DIR)/diagram-generator.lua \
--lua-filter=labelless-link-is-xref.lua \
--lua-filter=build-aux/pandoc-filters/myst-reader/roles.lua \
--lua-filter=build-aux/pandoc-filters/link-unix-man-references.lua \
--lua-filter=build-aux/pandoc-filters/docbook-writer/rst-roles.lua \
--lua-filter=build-aux/pandoc-filters/docbook-writer/labelless-link-is-xref.lua \
-f commonmark$(pandoc_commonmark_enabled_extensions)+smart
.PHONY: all

@@ -0,0 +1,23 @@
--[[
Converts Code AST nodes produced by pandoc's DocBook reader
from citerefentry elements into the AST for the corresponding role
for reStructuredText.
We use a subset of MyST syntax (CommonMark with features from rST)
so let's use the rST AST for rST features.
Reference: https://www.sphinx-doc.org/en/master/usage/restructuredtext/roles.html#role-manpage
]]
function Code(elem)
elem.classes = elem.classes:map(function (x)
if x == 'citerefentry' then
elem.attributes['role'] = 'manpage'
return 'interpreted-text'
else
return x
end
end)
return elem
end

@@ -1,3 +1,13 @@
--[[
Converts Link AST nodes with empty label to DocBook xref elements.
This is a temporary script to be able to use cross-references conveniently
using syntax taken from MyST, while we still use docbook-xsl
for generating the documentation.
Reference: https://myst-parser.readthedocs.io/en/latest/using/syntax.html#targets-and-cross-referencing
]]
local function starts_with(start, str)
return str:sub(1, #start) == start
end

@@ -0,0 +1,36 @@
--[[
Converts AST for reStructuredText roles into corresponding
DocBook elements.
Currently, only a subset of roles is supported.
Reference:
List of roles:
https://www.sphinx-doc.org/en/master/usage/restructuredtext/roles.html
manpage:
https://tdg.docbook.org/tdg/5.1/citerefentry.html
file:
https://tdg.docbook.org/tdg/5.1/filename.html
]]
function Code(elem)
if elem.classes:includes('interpreted-text') then
local tag = nil
local content = elem.text
if elem.attributes['role'] == 'manpage' then
tag = 'citerefentry'
local title, volnum = content:match('^(.+)%((%w+)%)$')
if title == nil then
-- No volnum in parentheses.
title = content
end
content = '<refentrytitle>' .. title .. '</refentrytitle>' .. (volnum ~= nil and ('<manvolnum>' .. volnum .. '</manvolnum>') or '')
elseif elem.attributes['role'] == 'file' then
tag = 'filename'
end
if tag ~= nil then
return pandoc.RawInline('docbook', '<' .. tag .. '>' .. content .. '</' .. tag .. '>')
end
end
end

@@ -0,0 +1,18 @@
--[[
Turns a manpage reference into a link, when a mapping is defined
in the unix-man-urls.lua file.
]]
local man_urls = {
["tmpfiles.d(5)"] = "https://www.freedesktop.org/software/systemd/man/tmpfiles.d.html",
["nix.conf(5)"] = "https://nixos.org/manual/nix/stable/#sec-conf-file",
["systemd.time(7)"] = "https://www.freedesktop.org/software/systemd/man/systemd.time.html",
["systemd.timer(5)"] = "https://www.freedesktop.org/software/systemd/man/systemd.timer.html",
}
function Code(elem)
local is_man_role = elem.classes:includes('interpreted-text') and elem.attributes['role'] == 'manpage'
if is_man_role and man_urls[elem.text] ~= nil then
return pandoc.Link(elem, man_urls[elem.text])
end
end

@@ -0,0 +1,29 @@
--[[
Replaces a Str AST node containing {role}, followed by a Code node,
with a Code node carrying the attrs that the rST reader would produce
from the role syntax.
This is to emulate MyST syntax in Pandoc.
(MyST is a CommonMark flavour with rST features mixed in.)
Reference: https://myst-parser.readthedocs.io/en/latest/syntax/syntax.html#roles-an-in-line-extension-point
]]
function Inlines(inlines)
for i = #inlines-1,1,-1 do
local first = inlines[i]
local second = inlines[i+1]
local correct_tags = first.tag == 'Str' and second.tag == 'Code'
if correct_tags then
-- docutils supports alphanumeric strings separated by [-._:]
-- We are slightly more liberal for simplicity.
local role = first.text:match('^{([-._+:%w]+)}$')
if role ~= nil then
inlines:remove(i)
second.attributes['role'] = role
second.classes:insert('interpreted-text')
end
end
end
return inlines
end

@@ -0,0 +1,25 @@
--[[
Replaces a Code node carrying the attrs that the rST reader would produce
from the role syntax with a Str AST node containing {role}, followed by a Code node.
This is to emulate MyST syntax in Pandoc.
(MyST is a CommonMark flavour with rST features mixed in.)
Reference: https://myst-parser.readthedocs.io/en/latest/syntax/syntax.html#roles-an-in-line-extension-point
]]
function Code(elem)
local role = elem.attributes['role']
if elem.classes:includes('interpreted-text') and role ~= nil then
elem.classes = elem.classes:filter(function (c)
return c ~= 'interpreted-text'
end)
elem.attributes['role'] = nil
return {
pandoc.Str('{' .. role .. '}'),
elem,
}
end
end

@@ -1,8 +1,16 @@
# Fetchers {#chap-pkgs-fetchers}
When using Nix, you will frequently need to download source code and other files from the internet. Nixpkgs comes with a few helper functions that allow you to fetch fixed-output derivations in a structured way.
When using Nix, you will frequently need to download source code and other files from the internet. For this purpose, Nix provides the [_fixed output derivation_](https://nixos.org/manual/nix/stable/#fixed-output-drvs) feature and Nixpkgs provides various functions that implement the actual fetching from various protocols and services.
The two fetcher primitives are `fetchurl` and `fetchzip`. Both of these have two required arguments, a URL and a hash. The hash is typically `sha256`, although many more hash algorithms are supported. Nixpkgs contributors are currently recommended to use `sha256`. This hash will be used by Nix to identify your source. A typical usage of fetchurl is provided below.
## Caveats
Because fixed output derivations are _identified_ by their hash, a common mistake is to update a fetcher's URL or a version parameter, without updating the hash. **This will cause the old contents to be used.** So remember to always invalidate the hash argument.
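A hedged sketch of the safe update workflow (URL and attribute names illustrative): reset the hash to a known-fake value, rebuild, then copy the hash Nix reports back in:

```nix
# Hypothetical update: `version` was just bumped, so the hash must be invalidated.
src = fetchurl {
  url = "https://example.org/foo-${version}.tar.gz";
  sha256 = lib.fakeSha256; # rebuild once, then paste the hash Nix reports here
};
```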
For those who develop and maintain fetchers, a similar problem arises with changes to the implementation of a fetcher. These may cause a fixed output derivation to fail, but won't normally be caught by tests because the supposed output is already in the store or cache. For the purpose of testing, you can use a trick that is embodied by the [`invalidateFetcherByDrvHash`](#sec-pkgs-invalidateFetcherByDrvHash) function. It uses the derivation `name` to create a unique output path per fetcher implementation, defeating the caching precisely where it would be harmful.
## `fetchurl` and `fetchzip` {#fetchurl}
Two basic fetchers are `fetchurl` and `fetchzip`. Both of these have two required arguments, a URL and a hash. The hash is typically `sha256`, although many more hash algorithms are supported. Nixpkgs contributors are currently recommended to use `sha256`. This hash will be used by Nix to identify your source. A typical usage of fetchurl is provided below.
```nix
{ stdenv, fetchurl }:
@@ -20,7 +28,7 @@ The main difference between `fetchurl` and `fetchzip` is in how they store the c
`fetchpatch` works very similarly to `fetchurl` with the same arguments expected. It expects patch files as a source and performs normalization on them before computing the checksum. For example it will remove comments or other unstable parts that are sometimes added by version control systems and can change over time.
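A minimal sketch of a `fetchpatch` call (the URL is an illustrative placeholder):

```nix
patches = [
  (fetchpatch {
    # Patch fetched from an upstream commit; it is normalized before hashing.
    url = "https://github.com/example/project/commit/0123456789abcdef.patch";
    sha256 = lib.fakeSha256; # replace with the hash reported on first build
  })
];
```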
Other fetcher functions allow you to add source code directly from a VCS such as subversion or git. These are mostly straightforward nambes based on the name of the command used with the VCS system. Because they give you a working repository, they act most like `fetchzip`.
Most other fetchers return a directory rather than a single file.
## `fetchsvn` {#fetchsvn}

@@ -0,0 +1,18 @@
# /etc files {#etc}
Certain calls in glibc require access to runtime files found in /etc such as `/etc/protocols` or `/etc/services` -- [getprotobyname](https://linux.die.net/man/3/getprotobyname) is one such function.
On non-NixOS distributions these files are typically provided by packages (e.g. [netbase](https://packages.debian.org/sid/netbase)) if not already pre-installed in your distribution. This can cause non-reproducibility for code that relies on these files being present.
If [iana-etc](https://hydra.nixos.org/job/nixos/trunk-combined/nixpkgs.iana-etc.x86_64-linux) is part of your _buildInputs_ then it will set the environment variables `NIX_ETC_PROTOCOLS` and `NIX_ETC_SERVICES` to the corresponding files in the package through a _setup-hook_.
```bash
> nix-shell -p iana-etc
[nix-shell:~]$ env | grep NIX_ETC
NIX_ETC_SERVICES=/nix/store/aj866hr8fad8flnggwdhrldm0g799ccz-iana-etc-20210225/etc/services
NIX_ETC_PROTOCOLS=/nix/store/aj866hr8fad8flnggwdhrldm0g799ccz-iana-etc-20210225/etc/protocols
```
Nixpkgs' version of [glibc](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/libraries/glibc/default.nix) has been patched to check for the existence of these environment variables. If the environment variables are *not set*, then it will attempt to find the files at the default location within _/etc_.
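A minimal sketch of a derivation that relies on this hook (package name hypothetical):

```nix
{ stdenv, iana-etc }:

stdenv.mkDerivation {
  pname = "example-net-tool"; # hypothetical package
  version = "1.0";
  src = ./.;
  # iana-etc's setup hook exports NIX_ETC_PROTOCOLS and NIX_ETC_SERVICES,
  # which the patched glibc consults instead of /etc at run time.
  buildInputs = [ iana-etc ];
}
```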

@@ -17,6 +17,7 @@
<xi:include href="kakoune.section.xml" />
<xi:include href="linux.section.xml" />
<xi:include href="locales.section.xml" />
<xi:include href="etc-files.section.xml" />
<xi:include href="nginx.section.xml" />
<xi:include href="opengl.section.xml" />
<xi:include href="shell-helpers.section.xml" />

@@ -16,7 +16,7 @@ How to add a new (major) version of the Linux kernel to Nixpkgs:
1. Copy the old Nix expression (e.g. `linux-2.6.21.nix`) to the new one (e.g. `linux-2.6.22.nix`) and update it.
2. Add the new kernel to `all-packages.nix` (e.g., create an attribute `kernel_2_6_22`).
2. Add the new kernel to the `kernels` attribute set in `linux-kernels.nix` (e.g., create an attribute `kernel_2_6_22`).
3. Now we’re going to update the kernel configuration. First unpack the kernel. Then for each supported platform (`i686`, `x86_64`, `uml`) do the following:
@@ -36,6 +36,6 @@ How to add a new (major) version of the Linux kernel to Nixpkgs:
5. Copy `.config` over the new config file (e.g. `config-2.6.22-i686-smp`).
4. Test building the kernel: `nix-build -A kernel_2_6_22`. If it compiles, ship it! For extra credit, try booting NixOS with it.
4. Test building the kernel: `nix-build -A linuxKernel.kernels.kernel_2_6_22`. If it compiles, ship it! For extra credit, try booting NixOS with it.
5. It may be that the new kernel requires updating the external kernel modules and kernel-dependent packages listed in the `linuxPackagesFor` function in `all-packages.nix` (such as the NVIDIA drivers, AUFS, etc.). If the updated packages aren’t backwards compatible with older kernels, you may need to keep the older versions around.
5. It may be that the new kernel requires updating the external kernel modules and kernel-dependent packages listed in the `linuxPackagesFor` function in `linux-kernels.nix` (such as the NVIDIA drivers, AUFS, etc.). If the updated packages aren’t backwards compatible with older kernels, you may need to keep the older versions around.

@@ -7,4 +7,5 @@
</para>
<xi:include href="special/fhs-environments.section.xml" />
<xi:include href="special/mkshell.section.xml" />
<xi:include href="special/invalidateFetcherByDrvHash.section.xml" />
</chapter>

@@ -0,0 +1,31 @@
## `invalidateFetcherByDrvHash` {#sec-pkgs-invalidateFetcherByDrvHash}
Use the derivation hash to invalidate the output via name, for testing.
Type: `(a@{ name, ... } -> Derivation) -> a -> Derivation`
Normally, fixed output derivations can and should be cached by their output
hash only, but for testing we want to re-fetch every time the fetcher changes.
Changes to the fetcher become apparent in the drvPath, which is a hash of
how to fetch, rather than a fixed store path.
By inserting this hash into the name, we can make sure to re-run the fetcher
every time the fetcher changes.
This relies on the assumption that Nix isn't clever enough to reuse its
database of local store contents to optimize fetching.
You might notice that the "salted" name derives from the normal invocation,
not the final derivation. `invalidateFetcherByDrvHash` has to invoke the fetcher
function twice: once to get a derivation hash, and again to produce the final
fixed output derivation.
Example:

```nix
tests.fetchgit = invalidateFetcherByDrvHash fetchgit {
  name = "nix-source";
  url = "https://github.com/NixOS/nix";
  rev = "9d9dbe6ed05854e03811c361a3380e09183f4f4a";
  sha256 = "sha256-7DszvbCNTjpzGRmpIVAWXk20P0/XTrWZ79KSOGLrUWY=";
};
```

@@ -181,6 +181,21 @@
rev = "${version}";
```
- Filling lists conditionally _should_ be done with `lib.optional(s)` instead of using `if cond then [ ... ] else null` or `if cond then [ ... ] else [ ]`.
```nix
buildInputs = lib.optional stdenv.isDarwin iconv;
```
instead of
```nix
buildInputs = if stdenv.isDarwin then [ iconv ] else null;
```
As an exception, an explicit conditional expression with null can be used when fixing an important bug without triggering a mass rebuild.
If this is done, a follow-up pull request _should_ be created to change the code to `lib.optional(s)`; a `lib.optionals` sketch follows this list.
- Arguments should be listed in the order they are used, with the exception of `lib`, which always goes first.
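For lists with several conditional elements, `lib.optionals` (plural) is the analogous helper; a one-line sketch (dependency names illustrative):

```nix
buildInputs = [ zlib ] ++ lib.optionals stdenv.isDarwin [ iconv libiconv ];
```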
## Package naming {#sec-package-naming}

@@ -52,6 +52,13 @@ Additionally, the following syntax extensions are currently used:
This syntax is taken from [MyST](https://myst-parser.readthedocs.io/en/latest/using/syntax.html#targets-and-cross-referencing).
- []{#ssec-contributing-markup-inline-roles}
If you want to link to a man page, you can use `` {manpage}`nix.conf(5)` ``, which will turn into {manpage}`nix.conf(5)`.
The references will turn into links when a mapping exists in {file}`doc/build-aux/pandoc-filters/unix-man-urls.lua`.
This syntax is taken from [MyST](https://myst-parser.readthedocs.io/en/latest/syntax/syntax.html#roles-an-in-line-extension-point). Though, the feature originates from [reStructuredText](https://www.sphinx-doc.org/en/master/usage/restructuredtext/roles.html#role-manpage) with slightly different syntax.
- []{#ssec-contributing-markup-admonitions}
**Admonitions**, set off from the text to bring attention to something.

@@ -158,7 +158,23 @@ This can be overridden.
By default, Agda sources are files ending on `.agda`, or literate Agda files ending on `.lagda`, `.lagda.tex`, `.lagda.org`, `.lagda.md`, `.lagda.rst`. The list of recognised Agda source extensions can be extended by setting the `extraExtensions` config variable.
## Adding Agda packages to Nixpkgs {#adding-agda-packages-to-nixpkgs}
## Maintaining the Agda package set on Nixpkgs {#maintaining-the-agda-package-set-on-nixpkgs}
We are aiming at providing all common Agda libraries as packages on `nixpkgs`,
and keeping them up to date.
Contributions and maintenance help are always appreciated,
but the maintenance effort is typically low since the Agda ecosystem is quite small.
The `nixpkgs` Agda package set tries to take up a role similar to that of [Stackage](https://www.stackage.org/) in the Haskell world.
It is a curated set of libraries that:
1. Always work together.
2. Are as up-to-date as possible.
While the Haskell ecosystem is huge, and Stackage is highly automatised,
the Agda package set is small and can (still) be maintained by hand.
### Adding Agda packages to Nixpkgs {#adding-agda-packages-to-nixpkgs}
To add an Agda package to `nixpkgs`, the derivation should be written to `pkgs/development/libraries/agda/${library-name}/` and an entry should be added to `pkgs/top-level/agda-packages.nix`. Here it is called in a scope with access to all other Agda libraries, so the top line of the `default.nix` can look like:
@@ -192,3 +208,49 @@ mkDerivation {
This library has a file called `.agda-lib`, and so we give an empty string to `libraryFile` as nothing precedes `.agda-lib` in the filename. This file contains `name: IAL-1.3`, and so we let `libraryName = "IAL-1.3"`. This library does not use an `Everything.agda` file and instead has a Makefile, so there is no need to set `everythingFile` and we set a custom `buildPhase`.
When writing an Agda package it is essential to make sure that no `.agda-lib` file gets added to the store as a single file (for example by using `writeText`). This causes Agda to think that the nix store is an Agda library and it will attempt to write to it whenever it typechecks something. See [https://github.com/agda/agda/issues/4613](https://github.com/agda/agda/issues/4613).
In the pull request adding this library,
you can test whether it builds correctly by writing in a comment:
```
@ofborg build agdaPackages.iowa-stdlib
```
### Maintaining Agda packages
As mentioned before, the aim is to have a compatible and up-to-date package set.
These two conditions sometimes exclude each other:
For example, if we update `agdaPackages.standard-library` because there was an upstream release,
this will typically break many reverse dependencies,
i.e. downstream Agda libraries that depend on the standard library.
In `nixpkgs` we are typically among the first to notice this,
since we have build tests in place to check this.
In a pull request updating e.g. the standard library, you should write the following comment:
```
@ofborg build agdaPackages.standard-library.passthru.tests
```
This will build all reverse dependencies of the standard library,
for example `agdaPackages.agda-categories`, or `agdaPackages.generic`.
In some cases it is useful to build _all_ Agda packages.
This can be done with the following GitHub comment:
```
@ofborg build agda.passthru.tests.allPackages
```
Sometimes, the builds of the reverse dependencies fail because they have not yet been updated and released.
You should drop the maintainers a quick issue notifying them of the breakage,
citing the build error (which you can get from the ofborg logs).
If you are motivated, you might even send a pull request that fixes it.
Usually, the maintainers will answer within a week or two with a new release.
Bumping the version of that reverse dependency should be a further commit on your PR.
In the rare case that a new release is not to be expected within an acceptable time,
simply mark the package as broken by setting `meta.broken = true;`.
This will exclude it from the build test.
It can be added later when it is fixed,
and does not hinder the advancement of the whole package set in the meantime.
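A sketch of what marking a package broken looks like (library name hypothetical):

```nix
mkDerivation {
  pname = "some-agda-library"; # hypothetical
  version = "1.0";
  src = ./.;
  # Excludes the package from the reverse-dependency build tests
  # until it is fixed upstream.
  meta.broken = true;
}
```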

@@ -68,74 +68,128 @@ Erlang.mk functions similarly to Rebar3, except we use `buildErlangMk` instead o
`mixRelease` is used to make a release in the mix sense. Dependencies will need to be fetched with `fetchMixDeps` and passed to it.
#### mixRelease - Elixir Phoenix example {#mixrelease---elixir-phoenix-example}
#### mixRelease - Elixir Phoenix example {#mix-release-elixir-phoenix-example}
Here is how your `default.nix` file would look.
There are three steps: frontend dependencies (JavaScript), backend dependencies (Elixir), and the final derivation that puts both of those together.
##### mixRelease - Frontend dependencies (javascript) {#mix-release-javascript-deps}
For Phoenix projects, inside of nixpkgs you can either use yarn2nix (mkYarnModule) or node2nix. An example with yarn2nix can be found [here](https://github.com/NixOS/nixpkgs/blob/master/pkgs/servers/web-apps/plausible/default.nix#L39); an example with node2nix will follow. To package something outside of nixpkgs, you have alternatives like [npmlock2nix](https://github.com/nix-community/npmlock2nix) or [nix-npm-buildpackage](https://github.com/serokell/nix-npm-buildpackage)
##### mixRelease - backend dependencies (mix) {#mix-release-mix-deps}
There are two ways to package backend dependencies: with mix2nix and with a fixed-output derivation (FOD).
###### mix2nix {#mix2nix}
mix2nix is a CLI tool available in nixpkgs. It will generate a Nix expression from a mix.lock file. It is quite standard in the 2nix tool series.
Note that currently mix2nix can't handle git dependencies inside the mix.lock file. If you have git dependencies, you can either add them manually (see [example](https://github.com/NixOS/nixpkgs/blob/master/pkgs/servers/pleroma/default.nix#L20)) or use the FOD method.
The advantage of using mix2nix is that nix will know your whole dependency graph. On a dependency update, this won't trigger a full rebuild and download of all the dependencies, where FOD will do so.
Practical steps:
- Run `mix2nix > mix_deps.nix` in the upstream repo.
- Pass `mixNixDeps = with pkgs; import ./mix_deps.nix { inherit lib beamPackages; };` as an argument to mixRelease.
If there are git dependencies:
- You'll need to fix the version artificially in mix.exs and regenerate the mix.lock with the fixed version (on upstream). This will enable you to run `mix2nix > mix_deps.nix`.
- From the mix_deps.nix file, remove the dependencies that had git versions and pass them as an override to the import function.
```nix
with import <nixpkgs> { };
mixNixDeps = import ./mix.nix {
inherit beamPackages lib;
overrides = (final: prev: {
# mix2nix does not support git dependencies yet,
# so we need to add them manually
prometheus_ex = beamPackages.buildMix rec {
name = "prometheus_ex";
version = "3.0.5";
# Change the argument src with the git src that you actually need
src = fetchFromGitLab {
domain = "git.pleroma.social";
group = "pleroma";
owner = "elixir-libraries";
repo = "prometheus.ex";
rev = "a4e9beb3c1c479d14b352fd9d6dd7b1f6d7deee5";
sha256 = "1v0q4bi7sb253i8q016l7gwlv5562wk5zy3l2sa446csvsacnpjk";
};
# you can re-use the same beamDeps argument as generated
beamDeps = with final; [ prometheus ];
};
});
};
```
let
packages = beam.packagesWith beam.interpreters.erlang;
src = builtins.fetchgit {
url = "ssh://git@github.com/your_id/your_repo";
rev = "replace_with_your_commit";
};
You will need to run the build process once to fix the sha256 to correspond to your new git src.
pname = "your_project";
version = "0.0.1";
mixEnv = "prod";
###### FOD {#fixed-output-derivation}
mixFodDeps = packages.fetchMixDeps {
A fixed output derivation will download mix dependencies from the internet. To ensure reproducibility, a hash will be supplied. Note that mix is relatively reproducible. An FOD generating a different hash on each run hasn't been observed (as opposed to npm where the chances are relatively high). See [elixir_ls](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/beam-modules/elixir_ls.nix) for a usage example of FOD.
Practical steps:
- Start with the following argument to mixRelease:
```nix
mixFodDeps = fetchMixDeps {
pname = "mix-deps-${pname}";
inherit src mixEnv version;
# nix will complain and tell you the right value to replace this with
inherit src version;
sha256 = lib.fakeSha256;
# if you have build time environment variables add them here
MY_ENV_VAR="my_value";
};
```
nodeDependencies = (pkgs.callPackage ./assets/default.nix { }).shell.nodeDependencies;
The first build will complain about the sha256 value; you can replace it with the suggested value after that.
frontEndFiles = stdenvNoCC.mkDerivation {
pname = "frontend-${pname}";
Note that if, after you've replaced the value, nix suggests another sha256, then mix is not fetching the dependencies reproducibly. An FOD will not work in that case and you will have to use mix2nix.
nativeBuildInputs = [ nodejs ];
##### mixRelease - example {#mix-release-example}
inherit version src;
Here is how your `default.nix` file would look for a phoenix project.
buildPhase = ''
cp -r ./assets $TEMPDIR
```nix
with import <nixpkgs> { };
mkdir -p $TEMPDIR/assets/node_modules/.cache
cp -r ${nodeDependencies}/lib/node_modules $TEMPDIR/assets
export PATH="${nodeDependencies}/bin:$PATH"
let
# beam.interpreters.erlangR23 is available if you need a particular version
packages = beam.packagesWith beam.interpreters.erlang;
cd $TEMPDIR/assets
webpack --config ./webpack.config.js
cd ..
'';
pname = "your_project";
version = "0.0.1";
installPhase = ''
cp -r ./priv/static $out/
'';
src = builtins.fetchgit {
url = "ssh://git@github.com/your_id/your_repo";
rev = "replace_with_your_commit";
};
outputHashAlgo = "sha256";
outputHashMode = "recursive";
# if using mix2nix you can use the mixNixDeps attribute
mixFodDeps = packages.fetchMixDeps {
pname = "mix-deps-${pname}";
inherit src version;
# nix will complain and tell you the right value to replace this with
outputHash = lib.fakeSha256;
impureEnvVars = lib.fetchers.proxyImpureEnvVars;
sha256 = lib.fakeSha256;
# if you have build time environment variables add them here
MY_ENV_VAR="my_value";
};
nodeDependencies = (pkgs.callPackage ./assets/default.nix { }).shell.nodeDependencies;
in packages.mixRelease {
inherit src pname version mixEnv mixFodDeps;
inherit src pname version mixFodDeps;
# if you have build time environment variables add them here
MY_ENV_VAR="my_value";
preInstall = ''
mkdir -p ./priv/static
cp -r ${frontEndFiles} ./priv/static
postBuild = ''
ln -sf ${nodeDependencies}/lib/node_modules assets/node_modules
npm run deploy --prefix ./assets
# for external task you need a workaround for the no deps check flag
# https://github.com/phoenixframework/phoenix/issues/2690
mix do deps.loadpaths --no-deps-check, phx.digest
mix phx.digest --no-deps-check
'';
}
```
@@ -165,6 +219,8 @@ in
systemd.services.${release_name} = {
wantedBy = [ "multi-user.target" ];
after = [ "network.target" "postgresql.service" ];
# note that if you are connecting to a postgres instance on a different host
# postgresql.service should not be included in the requires.
requires = [ "network-online.target" "postgresql.service" ];
description = "my app";
environment = {
@@ -201,6 +257,7 @@ in
path = [ pkgs.bash ];
};
# in case you have migration scripts or you want to use a remote shell
environment.systemPackages = [ release ];
}
```
@@ -215,16 +272,11 @@ Usually, we need to create a `shell.nix` file and do our development inside of t
{ pkgs ? import <nixpkgs> {} }:
with pkgs;
let
elixir = beam.packages.erlangR22.elixir_1_9;
elixir = beam.packages.erlangR24.elixir_1_12;
in
mkShell {
buildInputs = [ elixir ];
ERL_INCLUDE_PATH="${erlang}/lib/erlang/usr/include";
}
```
@@ -264,6 +316,7 @@ let
# TODO: not sure how to make hex available without installing it afterwards.
mix local.hex --if-missing
export LANG=en_US.UTF-8
# keep your shell history in iex
export ERL_AFLAGS="-kernel shell_history enabled"
# postgres related

@@ -28,12 +28,12 @@ The recommended way of defining a derivation for a Coq library, is to use the `c
* `domain` (optional, defaults to `"github.com"`), domains including the strings `"github"` or `"gitlab"` in their names are automatically supported, otherwise, one must change the `fetcher` argument to support them (cf `pkgs/development/coq-modules/heq/default.nix` for an example),
* `releaseRev` (optional, defaults to `(v: v)`), provides a default mapping from release names to revision hashes/branch names/tags,
* `displayVersion` (optional), provides a way to alter the computation of `name` from `pname`, by explaining how to display version numbers,
* `namePrefix` (optional), provides a way to alter the computation of `name` from `pname`, by explaining which dependencies must occur in `name`,
* `namePrefix` (optional, defaults to `[ "coq" ]`), provides a way to alter the computation of `name` from `pname`, by explaining which dependencies must occur in `name`,
* `extraBuildInputs` (optional), by default `buildInputs` just contains `coq`, this allows to add more build inputs,
* `mlPlugin` (optional, defaults to `false`). Some extensions (plugins) might require OCaml and sometimes other OCaml packages. Standard dependencies can be added by setting the current option to `true`. For a finer grain control, the `coq.ocamlPackages` attribute can be used in `extraBuildInputs` to depend on the same package set Coq was built against.
* `useDune2ifVersion` (optional, defaults to `(x: false)`) uses Dune2 to build the package if the provided predicate evaluates to true on the version, e.g. `useDune2ifVersion = versions.isGe "1.1"` will use dune if the version of the package is greater or equal to `"1.1"`,
* `useDune2` (optional, defaults to `false`) uses Dune2 to build the package if set to true, the presence of this attribute overrides the behavior of the previous one.
* `opam-name` (optional, defaults to `coq-` followed by the value of `pname`), name of the Dune package to build.
* `opam-name` (optional, defaults to concatenating with a dash separator the components of `namePrefix` and `pname`), name of the Dune package to build.
* `enableParallelBuilding` (optional, defaults to `true`), since it is activated by default, we provide a way to disable it.
* `extraInstallFlags` (optional), allows to extend `installFlags` which initializes the variable `COQMF_COQLIB` so as to install in the proper subdirectory. Indeed Coq libraries should be installed in `$(out)/lib/coq/${coq.coq-version}/user-contrib/`. Such directories are automatically added to the `$COQPATH` environment variable by the hook defined in the Coq derivation.
* `setCOQBIN` (optional, defaults to `true`), by default, the environment variable `$COQBIN` is set to the current Coq's binary, but one can disable this behavior by setting it to `false`,

@@ -28,8 +28,7 @@ mkShell {
packages = [
(with dotnetCorePackages; combinePackages [
sdk_3_1
sdk_3_0
sdk_2_1
sdk_5_0
])
];
}
@@ -64,12 +63,46 @@ $ dotnet --info
The `dotnetCorePackages.sdk_X_Y` is preferred over the old dotnet-sdk as both major and minor version are very important for a dotnet environment. If a given minor version isn't present (or was changed), then this will likely break your ability to build a project.
## dotnetCorePackages.sdk vs dotnetCorePackages.net vs dotnetCorePackages.netcore vs dotnetCorePackages.aspnetcore {#dotnetcorepackages.sdk-vs-dotnetcorepackages.net-vs-dotnetcorepackages.netcore-vs-dotnetcorepackages.aspnetcore}
## dotnetCorePackages.sdk vs dotnetCorePackages.runtime vs dotnetCorePackages.aspnetcore {#dotnetcorepackages.sdk-vs-dotnetcorepackages.runtime-vs-dotnetcorepackages.aspnetcore}
The `dotnetCorePackages.sdk` contains both a runtime and the full sdk of a given version. The `net`, `netcore` and `aspnetcore` packages are meant to serve as minimal runtimes to deploy alongside already built applications. For runtime versions >= .NET 5 `net` is used while `netcore` is used for older .NET Core runtime version.
The `dotnetCorePackages.sdk` contains both a runtime and the full sdk of a given version. The `runtime` and `aspnetcore` packages are meant to serve as minimal runtimes to deploy alongside already built applications.
## Packaging a Dotnet Application {#packaging-a-dotnet-application}
Ideally, we would like to build against the sdk, then only have the dotnet runtime available in the runtime closure.
To package Dotnet applications, you can use `buildDotnetModule`. This has similar arguments to `stdenv.mkDerivation`, with the following additions:
* `projectFile` has to be used for specifying the dotnet project file relative to the source root. These usually have `.sln` or `.csproj` file extensions.
* `nugetDeps` has to be used to specify the NuGet dependency file. Unfortunately, these cannot be deterministically fetched without a lockfile. This file should be generated using the `nuget-to-nix` tool, which is available in nixpkgs.
* `executables` is used to specify which executables get wrapped to `$out/bin`, relative to `$out/lib/$pname`. If this is unset, all executables generated will get installed. If you do not want to install any, set this to `[]`.
* `runtimeDeps` is used to wrap libraries into `LD_LIBRARY_PATH`. This is how dotnet usually handles runtime dependencies.
* `buildType` is used to change the type of build. Possible values are `Release`, `Debug`, etc. By default, this is set to `Release`.
* `dotnet-sdk` is useful in cases where you need to change what dotnet SDK is being used.
* `dotnet-runtime` is useful in cases where you need to change what dotnet runtime is being used.
* `dotnetRestoreFlags` can be used to pass flags to `dotnet restore`.
* `dotnetBuildFlags` can be used to pass flags to `dotnet build`.
* `dotnetInstallFlags` can be used to pass flags to `dotnet install`.
* `dotnetFlags` can be used to pass flags to all of the above phases.
Here is an example `default.nix`, using some of the previously discussed arguments:
```nix
{ lib, buildDotnetModule, dotnetCorePackages, ffmpeg }:
buildDotnetModule rec {
pname = "someDotnetApplication";
version = "0.1";
src = ./.;
projectFile = "src/project.sln";
nugetDeps = ./deps.nix; # File generated with `nuget-to-nix path/to/src > deps.nix`.
TODO: Create closure-friendly way to package dotnet applications
dotnet-sdk = dotnetCorePackages.sdk_3_1;
dotnet-runtime = dotnetCorePackages.net_5_0;
dotnetFlags = [ "--runtime linux-x64" ];
executables = [ "foo" ]; # This wraps "$out/lib/$pname/foo" to `$out/bin/foo`.
executables = []; # Don't install any executables.
runtimeDeps = [ ffmpeg ]; # This will wrap ffmpeg's library path into `LD_LIBRARY_PATH`.
}
```

@@ -12,6 +12,7 @@
<xi:include href="coq.section.xml" />
<xi:include href="crystal.section.xml" />
<xi:include href="dhall.section.xml" />
<xi:include href="dotnet.section.xml" />
<xi:include href="emscripten.section.xml" />
<xi:include href="gnome.section.xml" />
<xi:include href="go.section.xml" />
@@ -23,7 +24,9 @@
<xi:include href="javascript.section.xml" />
<xi:include href="lua.section.xml" />
<xi:include href="maven.section.xml" />
<xi:include href="nim.section.xml" />
<xi:include href="ocaml.section.xml" />
<xi:include href="octave.section.xml" />
<xi:include href="perl.section.xml" />
<xi:include href="php.section.xml" />
<xi:include href="python.section.xml" />

@@ -72,6 +72,15 @@ in
...
```
You can also specify what JDK your JRE should be based on, for example
selecting a 'headless' build to avoid including a link to GTK+:
```nix
my_jre = pkgs.jre_minimal.override {
jdk = jdk11_headless;
};
```
Note all JDKs passthru `home`, so if your application requires
environment variables like `JAVA_HOME` being set, that can be done in a
generic fashion with the `--set` argument of `makeWrapper`:
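A sketch, assuming `makeWrapper` is in `nativeBuildInputs` and `jdk` is the chosen JDK:

```nix
postFixup = ''
  # `home` is passed through by all JDK packages.
  wrapProgram $out/bin/my-app --set JAVA_HOME ${jdk.home}
'';
```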

@@ -2,7 +2,7 @@
## Introduction {#javascript-introduction}
This contains instructions on how to package javascript applications. For instructions on how to add a cli package from npm please consult the #node.js section
This contains instructions on how to package javascript applications.
The various tools available will be listed in the [tools-overview](#javascript-tools-overview). Some general principles for packaging will follow. Finally some tool specific instructions will be given.

@@ -0,0 +1,91 @@
# Nim {#nim}
## Overview {#nim-overview}
The Nim compiler, a builder function, and some packaged libraries are available
in Nixpkgs. Until now each compiler release has been effectively backwards
compatible so only the latest version is available.
## Nim program packages in Nixpkgs {#nim-program-packages-in-nixpkgs}
Nim programs can be built using `nimPackages.buildNimPackage`. In the
case of packages not containing exported library code the attribute
`nimBinOnly` should be set to `true`.
The following example shows a Nim program that depends only on Nim libraries:
```nix
{ lib, nimPackages, fetchurl }:
nimPackages.buildNimPackage rec {
pname = "hottext";
version = "1.4";
nimBinOnly = true;
src = fetchurl {
url = "https://git.sr.ht/~ehmry/hottext/archive/v${version}.tar.gz";
sha256 = "sha256-hIUofi81zowSMbt1lUsxCnVzfJGN3FEiTtN8CEFpwzY=";
};
buildInputs = with nimPackages; [
bumpy
chroma
flatty
nimsimd
pixie
sdl2
typography
vmath
zippy
];
}
```
## Nim library packages in Nixpkgs {#nim-library-packages-in-nixpkgs}
Nim libraries can also be built using `nimPackages.buildNimPackage`, but
often the product of a fetcher is sufficient to satisfy a dependency.
The `fetchgit`, `fetchFromGitHub`, and `fetchNimble` functions yield an
output that can be discovered during the `configurePhase` of `buildNimPackage`.
Nim library packages are listed in
[pkgs/top-level/nim-packages.nix](https://github.com/NixOS/nixpkgs/blob/master/pkgs/top-level/nim-packages.nix) and implemented at
[pkgs/development/nim-packages](https://github.com/NixOS/nixpkgs/tree/master/pkgs/development/nim-packages).
The following example shows a Nim library that propagates a dependency on a
non-Nim package:
```nix
{ lib, buildNimPackage, fetchNimble, SDL2 }:
buildNimPackage rec {
pname = "sdl2";
version = "2.0.4";
src = fetchNimble {
inherit pname version;
hash = "sha256-Vtcj8goI4zZPQs2TbFoBFlcR5UqDtOldaXSH/+/xULk=";
};
propagatedBuildInputs = [ SDL2 ];
}
```
## `buildNimPackage` parameters {#buildnimpackage-parameters}
All parameters from the `stdenv.mkDerivation` function are still supported. The
following are specific to `buildNimPackage`:
* `nimBinOnly ? false`: If `true` then build only the programs listed in
the Nimble file in the package's sources.
* `nimbleFile`: Specify the Nimble file location of the package being built
rather than discover the file at build-time.
* `nimRelease ? true`: Build the package in *release* mode.
* `nimDefines ? []`: A list of Nim defines. Key-value tuples are not supported.
* `nimFlags ? []`: A list of command line arguments to pass to the Nim compiler.
Use this to specify defines with arguments in the form of `-d:${name}=${value}`.
* `nimDoc ? false`: Build and install HTML documentation.
* `buildInputs ? []`: The packages listed here will be searched for `*.nimble`
files which are used to populate the Nim library path. Otherwise the standard
behavior is in effect.
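A hedged sketch combining a few of these parameters (package name and defines illustrative):

```nix
{ nimPackages }:

nimPackages.buildNimPackage {
  pname = "example-tool"; # hypothetical
  version = "0.1";
  src = ./.;
  nimBinOnly = true;
  nimDefines = [ "ssl" ]; # plain defines without values
  # Defines that carry a value go through nimFlags as -d:name=value.
  nimFlags = [ "-d:maxThreads=4" ];
}
```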

@@ -0,0 +1,100 @@
# Octave {#sec-octave}
## Introduction {#ssec-octave-introduction}
Octave is a modular scientific programming language and environment.
A majority of the packages supported by Octave from their [website](https://octave.sourceforge.io/packages.php) are packaged in nixpkgs.
## Structure {#ssec-octave-structure}
All Octave add-on packages are available in two ways:
1. Under the top-level `octave` attribute, `octave.pkgs`.
2. As a top-level attribute, `octavePackages`.
## Packaging Octave Packages {#ssec-octave-packaging}
Nixpkgs provides a function `buildOctavePackage`, a generic package builder function for any Octave package that complies with Octave's current packaging format.
All Octave packages are defined in [pkgs/top-level/octave-packages.nix](https://github.com/NixOS/nixpkgs/blob/master/pkgs/top-level/octave-packages.nix) rather than `pkgs/all-packages.nix`.
Each package is defined in their own file in the [pkgs/development/octave-modules](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/octave-modules) directory.
Octave packages are made available in `all-packages.nix` through both the attribute `octavePackages` and `octave.pkgs`.
You can test building an Octave package as follows:
```ShellSession
$ nix-build -A octavePackages.symbolic
```
When building Octave packages with `nix-build`, the `buildOctavePackage` function adds `octave-octaveVersion` to the start of the package's name attribute.
This can be required when installing the package using `nix-env`:
```ShellSession
$ nix-env -i octave-6.2.0-symbolic
```
Alternatively, you can install it using the attribute name:
```ShellSession
$ nix-env -i -A octavePackages.symbolic
```
You can build Octave with packages by using the `withPackages` passed-through function.
```ShellSession
$ nix-shell -p 'octave.withPackages (ps: with ps; [ symbolic ])'
```
This will also work in a `shell.nix` file.
```nix
{ pkgs ? import <nixpkgs> { }}:
pkgs.mkShell {
nativeBuildInputs = with pkgs; [
(octave.withPackages (opkgs: with opkgs; [ symbolic ]))
];
}
```
### `buildOctavePackage` Steps {#sssec-buildOctavePackage-steps}
The `buildOctavePackage` function does several things to make sure things work properly.
1. Sets the environment variable `OCTAVE_HISTFILE` to `/dev/null` during package compilation so that the commands run through the Octave interpreter directly are not logged.
2. Skips the configuration step, because the packages are stored as gzipped tarballs, which Octave itself handles directly.
3. Changes the hierarchy of the tarball so that only a single directory is at the top-most level of the tarball.
4. Uses Octave itself to run the `pkg build` command, which unzips the tarball, extracts the necessary files written in Octave, compiles any code written in C++ or Fortran, and places the fully compiled artifact in `$out`.
`buildOctavePackage` is built on top of `stdenv` in a standard way, allowing most things to be customized.
### Handling Dependencies {#sssec-octave-handling-dependencies}
In Octave packages, there are four sets of dependencies that can be specified:
`nativeBuildInputs`
: Just like other packages, `nativeBuildInputs` is intended for architecture-dependent build-time-only dependencies.
`buildInputs`
: Like other packages, `buildInputs` is intended for architecture-independent build-time-only dependencies.
`propagatedBuildInputs`
: Similar to other packages, `propagatedBuildInputs` is intended for packages that are required for both building and running of the package.
See [Symbolic](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/octave-modules/symbolic/default.nix) for how this works and why it is needed.
`requiredOctavePackages`
: This is a special dependency that ensures the specified Octave packages are dependent on others, and are made available simultaneously when loading them in Octave.
### Installing Octave Packages {#sssec-installing-octave-packages}
By default, the `buildOctavePackage` function does _not_ install the requested package into Octave for use.
The function will only build the requested package.
This is due to Octave maintaining a text-based database of which packages are installed where.
To this end, when all the requested packages have been built, the Octave package and all its add-on packages are put together into an environment, similar to Python.
1. First, all the Octave binaries are wrapped with the environment variable `OCTAVE_SITE_INITFILE` set to a file in `$out`, which is required for Octave to be able to find the non-standard package database location.
2. Because of the way `buildEnv` works, all tarballs that are present (which should be all Octave packages to install) should be removed.
3. The path down to the default install location of Octave packages is recreated so that Nix-operated Octave can install the packages.
4. Install the packages into the `$out` environment while writing package entries to the database file.
This database file is unique for each different (according to Nix) environment invocation.
5. Rewrite the Octave-wide startup file to read from the list of packages installed in that particular environment.
6. Wrap any programs that are required by the Octave packages so that they work with all the paths defined within the environment.
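As a quick check that such an environment was assembled correctly, you can list the visible packages from the wrapped Octave (a hedged example; the exact output depends on the packages chosen):
```ShellSession
$ nix-shell -p 'octave.withPackages (ps: [ ps.symbolic ])' --run 'octave --eval "pkg list"'
```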

@ -1513,7 +1513,7 @@ If you need to change a package's attribute(s) from `configuration.nix` you coul
python = super.python.override {
packageOverrides = python-self: python-super: {
twisted = python-super.twisted.overrideAttrs (oldAttrs: {
src = super.fetchPipy {
src = super.fetchPypi {
pname = "twisted";
version = "19.10.0";
sha256 = "7394ba7f272ae722a74f3d969dcf599bc4ef093bc392038748a490f1724a515d";

@ -96,6 +96,11 @@ re-enter the shell.
## Updating the package set {#updating-the-package-set}
There is a script and associated environment for regenerating the package
sets and synchronising the rPackages tree to the current CRAN and matching
BIOC release. These scripts are found in the `pkgs/development/r-modules`
directory and executed as follows:
```bash
nix-shell generate-shell.nix
@ -112,12 +117,11 @@ Rscript generate-r-packages.R bioc-experiment > bioc-experiment-packages.nix.new
mv bioc-experiment-packages.nix.new bioc-experiment-packages.nix
```
`generate-r-packages.R <repo>` reads `<repo>-packages.nix`, therefor the renaming.
## Testing if the Nix-expression could be evaluated {#testing-if-the-nix-expression-could-be-evaluated}
```bash
nix-build test-evaluation.nix --dry-run
```
`generate-r-packages.R <repo>` reads `<repo>-packages.nix`, therefore
the renaming.
If this exits fine, the expression is OK. If not, you have to edit `default.nix`.
Some packages require overrides to specify external dependencies or other
patches and special requirements. These overrides are specified in the
`pkgs/development/r-modules/default.nix` file. As the `*-packages.nix`
contents are automatically generated, they should not be edited by hand;
broken builds should be addressed using overrides.
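For example, an override might add a missing native dependency to a generated package. The following is only a sketch, assuming the common `overrideAttrs` pattern; the names (`xml2`, `pkgs.libxml2`, `old`) are illustrative and the real `default.nix` has its own override structure:
```nix
# Illustrative only; adapt to the override mechanism already used in
# pkgs/development/r-modules/default.nix.
xml2 = old.xml2.overrideAttrs (attrs: {
  buildInputs = (attrs.buildInputs or [ ]) ++ [ pkgs.libxml2 ];
});
```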

@ -20,7 +20,7 @@ or use Mozilla's [Rust nightlies overlay](#using-the-rust-nightlies-overlay).
Rust applications are packaged by using the `buildRustPackage` helper from `rustPlatform`:
```nix
{ lib, rustPlatform }:
{ lib, fetchFromGitHub, rustPlatform }:
rustPlatform.buildRustPackage rec {
pname = "ripgrep";
@ -116,22 +116,44 @@ is updated after every change to `Cargo.lock`. Therefore,
a `Cargo.lock` file using the `cargoLock` argument. For example:
```nix
rustPlatform.buildRustPackage rec {
rustPlatform.buildRustPackage {
pname = "myproject";
version = "1.0.0";
cargoLock = {
lockFile = ./Cargo.lock;
}
};
# ...
}
```
This will retrieve the dependencies using fixed-output derivations from
the specified lockfile. Note that setting `cargoLock.lockFile` doesn't
add a `Cargo.lock` to your `src`, and a `Cargo.lock` is still required
to build a rust package. A simple fix is to use:
the specified lockfile.
One caveat is that `Cargo.lock` cannot be patched in the `patchPhase`
because it runs after the dependencies have already been fetched. If
you need to patch or generate the lockfile you can alternatively set
`cargoLock.lockFileContents` to a string of its contents:
```nix
rustPlatform.buildRustPackage {
pname = "myproject";
version = "1.0.0";
cargoLock = let
# Apply whatever fixup the lockfile needs here; the identity
# transformation below is only a placeholder.
fixupLockFile = path: builtins.readFile path;
in {
lockFileContents = fixupLockFile ./Cargo.lock;
};
# ...
}
```
Note that setting `cargoLock.lockFile` or `cargoLock.lockFileContents`
doesn't add a `Cargo.lock` to your `src`, and a `Cargo.lock` is still
required to build a rust package. A simple fix is to use:
```nix
postPatch = ''
@ -215,22 +237,6 @@ where they are known to differ. But there are ways to customize the argument:
--target /nix/store/asdfasdfsadf-thumb-crazy.json # contains {"foo":"","bar":""}
```
Finally, as an ad-hoc escape hatch, a computed target (string or JSON file
path) can be passed directly to `buildRustPackage`:
```nix
pkgs.rustPlatform.buildRustPackage {
/* ... */
target = "x86_64-fortanix-unknown-sgx";
}
```
This is useful to avoid rebuilding Rust tools, since they are actually target
agnostic and don't need to be rebuilt. But in the future, we should always
build the Rust tools and standard library crates separately so there is no
reason not to take the `stdenv.hostPlatform.rustc`-modifying approach, and the
ad-hoc escape hatch to `buildRustPackage` can be removed.
Note that currently custom targets aren't compiled with `std`, so `cargo test`
will fail. This can be ignored by adding `doCheck = false;` to your derivation.
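A minimal sketch of that workaround (the package fields are illustrative):
```nix
rustPlatform.buildRustPackage {
  pname = "myproject";
  version = "1.0.0";

  # Custom targets are not compiled with `std`, so `cargo test` would fail.
  doCheck = false;

  # ...
}
```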

@ -79,7 +79,7 @@ A commonly adopted convention in `nixpkgs` is that executables provided by the p
The `glibc` package is a deliberate single exception to the “binaries first” convention. The `glibc` has `libs` as its first output allowing the libraries provided by `glibc` to be referenced directly (e.g. `${stdenv.glibc}/lib/ld-linux-x86-64.so.2`). The executables provided by `glibc` can be accessed via its `bin` attribute (e.g. `${stdenv.glibc.bin}/bin/ldd`).
The reason for why `glibc` deviates from the convention is because referencing a library provided by `glibc` is a very common operation among Nix packages. For instance, third-party executables packaged by Nix are typically patched and relinked with the relevant version of `glibc` libraries from Nix packages (please see the documentation on [patchelf](https://github.com/NixOS/patchelf/blob/master/README) for more details).
The reason for why `glibc` deviates from the convention is because referencing a library provided by `glibc` is a very common operation among Nix packages. For instance, third-party executables packaged by Nix are typically patched and relinked with the relevant version of `glibc` libraries from Nix packages (please see the documentation on [patchelf](https://github.com/NixOS/patchelf) for more details).
### File type groups {#multiple-output-file-type-groups}

@ -373,11 +373,11 @@ Additional file types can be supported by setting the `unpackCmd` variable (see
##### `srcs` / `src` {#var-stdenv-src}
The list of source files or directories to be unpacked or copied. One of these must be set.
The list of source files or directories to be unpacked or copied. One of these must be set. Note that if you use `srcs`, you should also set `sourceRoot` or `setSourceRoot`.
##### `sourceRoot` {#var-stdenv-sourceRoot}
After running `unpackPhase`, the generic builder changes the current directory to the directory created by unpacking the sources. If there are multiple source directories, you should set `sourceRoot` to the name of the intended directory.
After running `unpackPhase`, the generic builder changes the current directory to the directory created by unpacking the sources. If there are multiple source directories, you should set `sourceRoot` to the name of the intended directory. Set `sourceRoot = ".";` if you use `srcs` and control the unpack phase yourself.
##### `setSourceRoot` {#var-stdenv-setSourceRoot}

@ -138,6 +138,8 @@ All programs that are built with [MPI](https://en.wikipedia.org/wiki/Message_Pas
- [MPICH](https://www.mpich.org/), attribute name `mpich`
- [MVAPICH](https://mvapich.cse.ohio-state.edu/), attribute name `mvapich`
To provide MPI-enabled applications that use `MPICH` instead of the default `Open MPI`, use the following overlay:
```nix

@ -11,15 +11,7 @@
lib = import ./lib;
systems = [
"x86_64-linux"
"i686-linux"
"x86_64-darwin"
"aarch64-linux"
"armv6l-linux"
"armv7l-linux"
"aarch64-darwin"
];
systems = lib.systems.supported.hydra;
forAllSystems = f: lib.genAttrs systems (system: f system);

@ -145,7 +145,8 @@ rec {
let
outputs = drv.outputs or [ "out" ];
commonAttrs = drv // (builtins.listToAttrs outputsList) //
commonAttrs = (removeAttrs drv [ "outputUnspecified" ]) //
(builtins.listToAttrs outputsList) //
({ all = map (x: x.value) outputsList; }) // passthru;
outputToAttrListElement = outputName:

@ -91,7 +91,7 @@ let
concatImapStringsSep makeSearchPath makeSearchPathOutput
makeLibraryPath makeBinPath optionalString
hasInfix hasPrefix hasSuffix stringToCharacters stringAsChars escape
escapeShellArg escapeShellArgs escapeRegex replaceChars lowerChars
escapeShellArg escapeShellArgs escapeRegex escapeXML replaceChars lowerChars
upperChars toLower toUpper addContextFrom splitString
removePrefix removeSuffix versionOlder versionAtLeast
getName getVersion
@ -123,8 +123,8 @@ let
inherit (self.options) isOption mkEnableOption mkSinkUndeclaredOptions
mergeDefaultOption mergeOneOption mergeEqualOption getValues
getFiles optionAttrSetToDocList optionAttrSetToDocList'
scrubOptionValue literalExample showOption showFiles
unknownModule mkOption;
scrubOptionValue literalExpression literalExample literalDocBook
showOption showFiles unknownModule mkOption;
inherit (self.types) isType setType defaultTypeMerge defaultFunctor
isOptionType mkOptionType;
inherit (self.asserts)

@ -35,6 +35,8 @@ rec {
("generators.mkValueStringDefault: " +
"${t} not supported: ${toPretty {} v}");
in if isInt v then toString v
# convert derivations to store paths
else if lib.isDerivation v then toString v
# we default to not quoting strings
else if isString v then v
# isString returns "1", which is not a good default
@ -169,7 +171,7 @@ rec {
# converts { a.b.c = 5; } to { "a.b".c = 5; } for toINI
gitFlattenAttrs = let
recurse = path: value:
if isAttrs value then
if isAttrs value && !lib.isDerivation value then
lib.mapAttrsToList (name: value: recurse ([ name ] ++ path) value) value
else if length path > 1 then {
${concatStringsSep "." (lib.reverseList (tail path))}.${head path} = value;
@ -195,6 +197,30 @@ rec {
*/
toYAML = {}@args: toJSON args;
withRecursion =
args@{
/* If this option is not null, evaluation of the given value will stop at the given depth */
depthLimit
/* If this option is true, an error will be thrown if the depth limit is exceeded */
, throwOnDepthLimit ? true
}:
assert builtins.isInt depthLimit;
let
transform = depth:
if depthLimit != null && depth > depthLimit then
if throwOnDepthLimit
then throw "Exceeded maximum eval-depth limit of ${toString depthLimit} while trying to evaluate with `generators.withRecursion'!"
else const "<unevaluated>"
else id;
mapAny = with builtins; depth: v:
let
evalNext = x: mapAny (depth + 1) (transform (depth + 1) x);
in
if isAttrs v then mapAttrs (const evalNext) v
else if isList v then map evalNext v
else transform (depth + 1) v;
in
mapAny 0;
/* Pretty print a value, akin to `builtins.trace`.
* Should probably be a builtin as well.
@ -206,7 +232,8 @@ rec {
allowPrettyValues ? false,
/* If this option is true, the output is indented with newlines for attribute sets and lists */
multiline ? true
}@args: let
}@args:
let
go = indent: v: with builtins;
let isPath = v: typeOf v == "path";
introSpace = if multiline then "\n${indent} " else " ";

@ -153,6 +153,11 @@ in mkLicense lset) ({
free = false;
};
capec = {
fullName = "Common Attack Pattern Enumeration and Classification";
url = "https://capec.mitre.org/about/termsofuse.html";
};
clArtistic = {
spdxId = "ClArtistic";
fullName = "Clarified Artistic License";
@ -240,6 +245,11 @@ in mkLicense lset) ({
fullName = "CeCILL Free Software License Agreement v2.0";
};
cecill21 = {
spdxId = "CECILL-2.1";
fullName = "CeCILL Free Software License Agreement v2.1";
};
cecill-b = {
spdxId = "CECILL-B";
fullName = "CeCILL-B Free Software License Agreement";

@ -162,13 +162,24 @@ rec {
baseMsg = "The option `${showOption (prefix ++ firstDef.prefix)}' does not exist. Definition values:${showDefs [ firstDef ]}";
in
if attrNames options == [ "_module" ]
then throw ''
${baseMsg}
However there are no options defined in `${showOption prefix}'. Are you sure you've
declared your options properly? This can happen if you e.g. declared your options in `types.submodule'
under `config' rather than `options'.
''
then
let
optionName = showOption prefix;
in
if optionName == ""
then throw ''
${baseMsg}
It seems as if you're trying to declare an option by placing it into `config' rather than `options'!
''
else
throw ''
${baseMsg}
However there are no options defined in `${showOption prefix}'. Are you sure you've
declared your options properly? This can happen if you e.g. declared your options in `types.submodule'
under `config' rather than `options'.
''
else throw baseMsg
else null;

@ -54,7 +54,7 @@ rec {
Example:
mkOption { } // => { _type = "option"; }
mkOption { defaultText = "foo"; } // => { _type = "option"; defaultText = "foo"; }
mkOption { default = "foo"; } // => { _type = "option"; default = "foo"; }
*/
mkOption =
{
@ -212,11 +212,25 @@ rec {
else x;
/* For use in the `example` option attribute. It causes the given
text to be included verbatim in documentation. This is necessary
for example values that are not simple values, e.g., functions.
/* For use in the `defaultText` and `example` option attributes. Causes the
given string to be rendered verbatim in the documentation as Nix code. This
is necessary for complex values, e.g. functions, or values that depend on
other values or packages.
*/
literalExample = text: { _type = "literalExample"; inherit text; };
literalExpression = text:
if ! isString text then throw "literalExpression expects a string."
else { _type = "literalExpression"; inherit text; };
literalExample = lib.warn "literalExample is deprecated, use literalExpression instead, or use literalDocBook for a non-Nix description." literalExpression;
/* For use in the `defaultText` and `example` option attributes. Causes the
given DocBook text to be inserted verbatim in the documentation, for when
a `literalExpression` would be too hard to read.
*/
literalDocBook = text:
if ! isString text then throw "literalDocBook expects a string."
else { _type = "literalDocBook"; inherit text; };
# Helper functions.
@ -247,7 +261,9 @@ rec {
showDefs = defs: concatMapStrings (def:
let
# Pretty print the value for display, if successful
prettyEval = builtins.tryEval (lib.generators.toPretty {} def.value);
prettyEval = builtins.tryEval
(lib.generators.toPretty { }
(lib.generators.withRecursion { depthLimit = 10; throwOnDepthLimit = false; } def.value));
# Split it into its lines
lines = filter (v: ! isList v) (builtins.split "\n" prettyEval.value);
# Only display the first 5 lines, and indent them for better visibility

@ -43,7 +43,9 @@ let
lib.hasSuffix ".o" baseName ||
lib.hasSuffix ".so" baseName ||
# Filter out nix-build result symlinks
(type == "symlink" && lib.hasPrefix "result" baseName)
(type == "symlink" && lib.hasPrefix "result" baseName) ||
# Filter out sockets and other types of files we can't have in the store.
(type == "unknown")
);
# Filters a source tree removing version control files and directories using cleanSourceWith

@ -362,6 +362,19 @@ rec {
if match "[a-zA-Z_][a-zA-Z0-9_'-]*" s != null
then s else escapeNixString s;
/* Escapes a string such that it is safe to include verbatim in an XML
document.
Type: string -> string
Example:
escapeXML ''"test" 'test' < & >''
=> "\\[\\^a-z]\\*"
*/
escapeXML = builtins.replaceStrings
["\"" "'" "<" ">" "&"]
["&quot;" "&apos;" "&lt;" "&gt;" "&amp;"];
# Obsolete - use replaceStrings instead.
replaceChars = builtins.replaceStrings or (
del: new: s:

@ -8,6 +8,7 @@ rec {
platforms = import ./platforms.nix { inherit lib; };
examples = import ./examples.nix { inherit lib; };
architectures = import ./architectures.nix { inherit lib; };
supported = import ./supported.nix { inherit lib; };
# Elaborate a `localSystem` or `crossSystem` so that it contains everything
# necessary.
@ -107,6 +108,7 @@ rec {
else if final.isMips then "mips"
else if final.isPower then "powerpc"
else if final.isRiscV then "riscv"
else if final.isS390 then "s390"
else final.parsed.cpu.name;
qemuArch =

@ -28,7 +28,7 @@ let
"aarch64-linux" "armv5tel-linux" "armv6l-linux" "armv7a-linux"
"armv7l-linux" "i686-linux" "m68k-linux" "mipsel-linux"
"powerpc64-linux" "powerpc64le-linux" "riscv32-linux"
"riscv64-linux" "s390-linux" "x86_64-linux"
"riscv64-linux" "s390-linux" "s390x-linux" "x86_64-linux"
# MMIXware
"mmix-mmixware"
@ -41,7 +41,8 @@ let
# none
"aarch64-none" "arm-none" "armv6l-none" "avr-none" "i686-none"
"msp430-none" "or1k-none" "m68k-none" "powerpc-none"
"riscv32-none" "riscv64-none" "s390-none" "vc4-none" "x86_64-none"
"riscv32-none" "riscv64-none" "s390-none" "s390x-none" "vc4-none"
"x86_64-none"
# OpenBSD
"i686-openbsd" "x86_64-openbsd"

@ -152,6 +152,10 @@ rec {
config = "s390-unknown-linux-gnu";
};
s390x = {
config = "s390x-unknown-linux-gnu";
};
arm-embedded = {
config = "arm-none-eabi";
libc = "newlib";

@ -106,6 +106,7 @@ rec {
riscv64 = { bits = 64; significantByte = littleEndian; family = "riscv"; };
s390 = { bits = 32; significantByte = bigEndian; family = "s390"; };
s390x = { bits = 64; significantByte = bigEndian; family = "s390"; };
sparc = { bits = 32; significantByte = bigEndian; family = "sparc"; };
sparc64 = { bits = 64; significantByte = bigEndian; family = "sparc"; };

@ -0,0 +1,25 @@
# Supported systems according to RFC0046's definition.
#
# https://github.com/NixOS/rfcs/blob/master/rfcs/0046-platform-support-tiers.md
{ lib }:
rec {
# List of systems that are built by Hydra.
hydra = tier1 ++ tier2 ++ tier3;
tier1 = [
"x86_64-linux"
];
tier2 = [
"aarch64-linux"
"x86_64-darwin"
];
tier3 = [
"aarch64-darwin"
"armv6l-linux"
"armv7l-linux"
"i686-linux"
"mipsel-linux"
];
}

@ -16,6 +16,10 @@ let
email = lib.mkOption {
type = types.str;
};
matrix = lib.mkOption {
type = types.nullOr types.str;
default = null;
};
github = lib.mkOption {
type = types.nullOr types.str;
default = null;

@ -246,6 +246,11 @@ runTests {
};
};
testEscapeXML = {
expr = escapeXML ''"test" 'test' < & >'';
expected = "&quot;test&quot; &apos;test&apos; &lt; &amp; &gt;";
};
# LISTS
testFilter = {
@ -529,6 +534,25 @@ runTests {
};
};
testToPrettyLimit =
let
a.b = 1;
a.c = a;
in {
expr = generators.toPretty { } (generators.withRecursion { throwOnDepthLimit = false; depthLimit = 2; } a);
expected = "{\n b = 1;\n c = {\n b = \"<unevaluated>\";\n c = {\n b = \"<unevaluated>\";\n c = \"<unevaluated>\";\n };\n };\n}";
};
testToPrettyLimitThrow =
let
a.b = 1;
a.c = a;
in {
expr = (builtins.tryEval
(generators.toPretty { } (generators.withRecursion { depthLimit = 2; } a))).success;
expected = false;
};
testToPrettyMultiline = {
expr = mapAttrs (const (generators.toPretty { })) rec {
list = [ 3 4 [ false ] ];

@ -254,8 +254,10 @@ checkConfigOutput / config.value.path ./types-anything/equal-atoms.nix
checkConfigOutput null config.value.null ./types-anything/equal-atoms.nix
checkConfigOutput 0.1 config.value.float ./types-anything/equal-atoms.nix
# Functions can't be merged together
checkConfigError "The option .* has conflicting definition values" config.value.multiple-lambdas ./types-anything/functions.nix
checkConfigError "The option .value.multiple-lambdas.<function body>. has conflicting option types" config.applied.multiple-lambdas ./types-anything/functions.nix
checkConfigOutput '<LAMBDA>' config.value.single-lambda ./types-anything/functions.nix
checkConfigOutput 'null' config.applied.merging-lambdas.x ./types-anything/functions.nix
checkConfigOutput 'null' config.applied.merging-lambdas.y ./types-anything/functions.nix
# Check that all mk* modifiers are applied
checkConfigError 'attribute .* not found' config.value.mkiffalse ./types-anything/mk-mods.nix
checkConfigOutput '{ }' config.value.mkiftrue ./types-anything/mk-mods.nix

@ -1,16 +1,22 @@
{ lib, ... }: {
{ lib, config, ... }: {
options.value = lib.mkOption {
type = lib.types.anything;
};
options.applied = lib.mkOption {
default = lib.mapAttrs (name: fun: fun null) config.value;
};
config = lib.mkMerge [
{
value.single-lambda = x: x;
value.multiple-lambdas = x: x;
value.multiple-lambdas = x: { inherit x; };
value.merging-lambdas = x: { inherit x; };
}
{
value.multiple-lambdas = x: x;
value.multiple-lambdas = x: [ x ];
value.merging-lambdas = y: { inherit y; };
}
];

@ -28,7 +28,7 @@ with lib.systems.doubles; lib.runTests {
testredox = mseteq redox [ "x86_64-redox" ];
testgnu = mseteq gnu (linux /* ++ kfreebsd ++ ... */);
testillumos = mseteq illumos [ "x86_64-solaris" ];
testlinux = mseteq linux [ "aarch64-linux" "armv5tel-linux" "armv6l-linux" "armv7a-linux" "armv7l-linux" "i686-linux" "mipsel-linux" "riscv32-linux" "riscv64-linux" "x86_64-linux" "powerpc64-linux" "powerpc64le-linux" "m68k-linux" "s390-linux" ];
testlinux = mseteq linux [ "aarch64-linux" "armv5tel-linux" "armv6l-linux" "armv7a-linux" "armv7l-linux" "i686-linux" "mipsel-linux" "riscv32-linux" "riscv64-linux" "x86_64-linux" "powerpc64-linux" "powerpc64le-linux" "m68k-linux" "s390-linux" "s390x-linux" ];
testnetbsd = mseteq netbsd [ "aarch64-netbsd" "armv6l-netbsd" "armv7a-netbsd" "armv7l-netbsd" "i686-netbsd" "m68k-netbsd" "mipsel-netbsd" "powerpc-netbsd" "riscv32-netbsd" "riscv64-netbsd" "x86_64-netbsd" ];
testopenbsd = mseteq openbsd [ "i686-openbsd" "x86_64-openbsd" ];
testwindows = mseteq windows [ "i686-cygwin" "x86_64-cygwin" "i686-windows" "x86_64-windows" ];

@ -303,7 +303,26 @@ rec {
# TODO: figure out a clever way to integrate location information from
# something like __unsafeGetAttrPos.
warn = msg: builtins.trace "warning: ${msg}";
/*
Print a warning before returning the second argument. This function behaves
like `builtins.trace`, but requires a string message and formats it as a
warning, including the `warning: ` prefix.
To get a call stack trace and abort evaluation, set the environment variable
`NIX_ABORT_ON_WARN=true` and set the Nix options `--option pure-eval false --show-trace`.
Type: string -> a -> a
*/
warn =
if lib.elem (builtins.getEnv "NIX_ABORT_ON_WARN") ["1" "true" "yes"]
then msg: builtins.trace "warning: ${msg}" (abort "NIX_ABORT_ON_WARN=true; warnings are treated as unrecoverable errors.")
else msg: builtins.trace "warning: ${msg}";
/*
Like warn, but only warn when the first argument is `true`.
Type: bool -> string -> a -> a
*/
warnIf = cond: msg: if cond then warn msg else id;
info = msg: builtins.trace "INFO: ${msg}";

@ -192,6 +192,12 @@ rec {
else (listOf anything).merge;
# This is the type of packages, only accept a single definition
stringCoercibleSet = mergeOneOption;
lambda = loc: defs: arg: anything.merge
(loc ++ [ "<function body>" ])
(map (def: {
file = def.file;
value = def.value arg;
}) defs);
# Otherwise fall back to only allowing all equal definitions
}.${commonType} or mergeEqualOption;
in mergeFunction loc defs;

@ -0,0 +1,88 @@
#! /usr/bin/env nix-shell
#! nix-shell -I nixpkgs=. -i bash -p pandoc
# This script is temporarily needed while we transition the manual to
# CommonMark. It converts DocBook files into our CommonMark flavour.
debug=
files=()
while [ "$#" -gt 0 ]; do
i="$1"; shift 1
case "$i" in
--debug)
debug=1
;;
*)
files+=("$i")
;;
esac
done
echo "WARNING: This is an experimental script and might not preserve all formatting." > /dev/stderr
echo "Please report any issues you discover." > /dev/stderr
outExtension="md"
if [[ $debug ]]; then
outExtension="json"
fi
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
# NOTE: Keep in sync with Nixpkgs manual (/doc/Makefile).
# TODO: Remove raw-attribute when we can get rid of DocBook altogether.
pandoc_commonmark_enabled_extensions=+attributes+fenced_divs+footnotes+bracketed_spans+definition_lists+pipe_tables+raw_attribute
targetLang="commonmark${pandoc_commonmark_enabled_extensions}+smart"
if [[ $debug ]]; then
targetLang=json
fi
pandoc_flags=(
# Not needed:
# - diagram-generator.lua (we do not support that in NixOS manual to limit dependencies)
# - media extraction (was only required for diagram generator)
# - myst-reader/roles.lua (only relevant for MyST → DocBook)
# - link-unix-man-references.lua (links should only be added to display output)
# - docbook-writer/rst-roles.lua (only relevant for → DocBook)
# - docbook-writer/labelless-link-is-xref.lua (only relevant for → DocBook)
"--lua-filter=$DIR/../../doc/build-aux/pandoc-filters/docbook-reader/citerefentry-to-rst-role.lua"
"--lua-filter=$DIR/../../doc/build-aux/pandoc-filters/myst-writer/roles.lua"
"--lua-filter=$DIR/doc/unknown-code-language.lua"
-f docbook
-t "$targetLang"
--tab-stop=2
--wrap=none
)
for file in "${files[@]}"; do
if [[ ! -f "$file" ]]; then
echo "db-to-md.sh: $file does not exist" > /dev/stderr
exit 1
else
rootElement=$(xmllint --xpath 'name(//*)' "$file")
if [[ $rootElement = chapter ]]; then
extension=".chapter.$outExtension"
elif [[ $rootElement = section ]]; then
extension=".section.$outExtension"
else
echo "db-to-md.sh: $file contains an unsupported root element $rootElement" > /dev/stderr
exit 1
fi
outFile="${file%".section.xml"}"
outFile="${outFile%".chapter.xml"}"
outFile="${outFile%".xml"}$extension"
temp1=$(mktemp)
$DIR/doc/escape-code-markup.py "$file" "$temp1"
if [[ $debug ]]; then
echo "Converted $file to $temp1" > /dev/stderr
fi
temp2=$(mktemp)
$DIR/doc/replace-xrefs-by-empty-links.py "$temp1" "$temp2"
if [[ $debug ]]; then
echo "Converted $temp1 to $temp2" > /dev/stderr
fi
pandoc "$temp2" -o "$outFile" "${pandoc_flags[@]}"
echo "Converted $file to $outFile" > /dev/stderr
fi
done

@ -0,0 +1,97 @@
#! /usr/bin/env nix-shell
#! nix-shell -I nixpkgs=channel:nixos-unstable -i python3 -p python3 -p python3.pkgs.lxml
"""
Pandoc will strip any markup within code elements, so
let's escape them so that they can be handled manually.
"""
import lxml.etree as ET
import re
import sys
def replace_element_by_text(el: ET.Element, text: str) -> None:
"""
Author: bernulf
Source: https://stackoverflow.com/a/10520552/160386
SPDX-License-Identifier: CC-BY-SA-3.0
"""
text = text + (el.tail or "")
parent = el.getparent()
if parent is not None:
previous = el.getprevious()
if previous is not None:
previous.tail = (previous.tail or "") + text
else:
parent.text = (parent.text or "") + text
parent.remove(el)
DOCBOOK_NS = "http://docbook.org/ns/docbook"
# List of elements that pandoc’s DocBook reader strips markup from.
# https://github.com/jgm/pandoc/blob/master/src/Text/Pandoc/Readers/DocBook.hs
code_elements = [
# CodeBlock
"literallayout",
"screen",
"programlisting",
# Code (inline)
"classname",
"code",
"filename",
"envar",
"literal",
"computeroutput",
"prompt",
"parameter",
"option",
"markup",
"wordasword",
"command",
"varname",
"function",
"type",
"symbol",
"constant",
"userinput",
"systemitem",
]
XMLNS_REGEX = re.compile(r'\s+xmlns(?::[^=]+)?="[^"]*"')
ROOT_ELEMENT_REGEX = re.compile(r'^\s*<[^>]+>')
def remove_xmlns(match: re.Match) -> str:
"""
Removes xmlns attributes.
Expects a match containing an opening tag.
"""
return XMLNS_REGEX.sub('', match.group(0))
if __name__ == '__main__':
assert len(sys.argv) >= 3, "usage: escape-code-markup.py <input> <output>"
tree = ET.parse(sys.argv[1])
name_predicate = " or ".join([f"local-name()='{el}'" for el in code_elements])
for markup in tree.xpath(f"//*[({name_predicate}) and namespace-uri()='{DOCBOOK_NS}']/*"):
text = ET.tostring(markup, encoding=str)
# tostring adds xmlns attributes to the element we want to stringify
# as if it was supposed to be usable standalone.
# We are just converting it to CDATA so we do not care.
# Let’s strip the namespace declarations to keep the code clean.
#
# Note that this removes even namespaces that were potentially
# in the original file. Though, that should be very rare –
# most of the time, we will stringify empty DocBook elements
# like <xref> or <co> or, at worst, <link> with xlink:href attribute.
#
# Also note that the regex expects the root element to be first
# thing in the string. But that should be fine, the tostring method
# does not produce XML declaration or doctype by default.
text = ROOT_ELEMENT_REGEX.sub(remove_xmlns, text)
replace_element_by_text(markup, text)
tree.write(sys.argv[2])

@ -0,0 +1,32 @@
#! /usr/bin/env nix-shell
#! nix-shell -I nixpkgs=channel:nixos-unstable -i python3 -p python3 -p python3.pkgs.lxml
"""
Pandoc will try to resolve xrefs and replace them with regular links.
Let's replace them with links with empty labels, which MyST
and our pandoc filters recognize as cross-references.
"""
import lxml.etree as ET
import sys
XLINK_NS = "http://www.w3.org/1999/xlink"
ns = {
"db": "http://docbook.org/ns/docbook",
}
if __name__ == '__main__':
assert len(sys.argv) >= 3, "usage: replace-xrefs-by-empty-links.py <input> <output>"
tree = ET.parse(sys.argv[1])
for xref in tree.findall(".//db:xref", ns):
text = ET.tostring(xref, encoding=str)
parent = xref.getparent()
link = parent.makeelement('link')
target_name = xref.get("linkend")
link.set(f"{{{XLINK_NS}}}href", f"#{target_name}")
parent.replace(xref, link)
tree.write(sys.argv[2])

@ -0,0 +1,12 @@
--[[
Adds unknown class to CodeBlock AST nodes without any classes.
This will cause Pandoc to use fenced code block, which we prefer.
]]
function CodeBlock(elem)
if #elem.classes == 0 then
elem.classes:insert('unknown')
return elem
end
end

@ -0,0 +1,10 @@
# Nix script to calculate the Haskell dependencies of every haskellPackage. Used by ./hydra-report.hs.
let
pkgs = import ../../.. {};
inherit (pkgs) lib;
getDeps = _: pkg: {
deps = builtins.filter (x: !isNull x) (map (x: x.pname or null) (pkg.propagatedBuildInputs or []));
broken = (pkg.meta.hydraPlatforms or [null]) == [];
};
in
lib.mapAttrs getDeps pkgs.haskellPackages
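# Evaluated by ./hydra-report.hs via `nix-instantiate --eval --strict --json` (see nixExprParams there).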

@ -26,6 +26,8 @@ Because step 1) is quite expensive and takes roughly ~5 minutes the result is ca
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE TupleSections #-}
{-# OPTIONS_GHC -Wall #-}
{-# LANGUAGE ViewPatterns #-}
{-# LANGUAGE TupleSections #-}
import Control.Monad (forM_, (<=<))
import Control.Monad.Trans (MonadIO (liftIO))
@ -41,7 +43,7 @@ import Data.List.NonEmpty (NonEmpty, nonEmpty)
import qualified Data.List.NonEmpty as NonEmpty
import Data.Map.Strict (Map)
import qualified Data.Map.Strict as Map
import Data.Maybe (fromMaybe, mapMaybe)
import Data.Maybe (fromMaybe, mapMaybe, isNothing)
import Data.Monoid (Sum (Sum, getSum))
import Data.Sequence (Seq)
import qualified Data.Sequence as Seq
@ -70,6 +72,12 @@ import System.Directory (XdgDirectory (XdgCache), getXdgDirectory)
import System.Environment (getArgs)
import System.Process (readProcess)
import Prelude hiding (id)
import Data.List (sortOn)
import Control.Concurrent.Async (concurrently)
import Control.Exception (evaluate)
import qualified Data.IntMap.Strict as IntMap
import qualified Data.IntSet as IntSet
import Data.Bifunctor (second)
newtype JobsetEvals = JobsetEvals
{ evals :: Seq Eval
@ -134,20 +142,17 @@ hydraEvalCommand = "hydra-eval-jobs"
hydraEvalParams :: [String]
hydraEvalParams = ["-I", ".", "pkgs/top-level/release-haskell.nix"]
handlesCommand :: FilePath
handlesCommand = "nix-instantiate"
handlesParams :: [String]
handlesParams = ["--eval", "--strict", "--json", "-"]
nixExprCommand :: FilePath
nixExprCommand = "nix-instantiate"
handlesExpression :: String
handlesExpression = "with import ./. {}; with lib; zipAttrsWith (_: builtins.head) (mapAttrsToList (_: v: if v ? github then { \"${v.email}\" = v.github; } else {}) (import maintainers/maintainer-list.nix))"
nixExprParams :: [String]
nixExprParams = ["--eval", "--strict", "--json"]
-- | This newtype is used to parse a Hydra job output from @hydra-eval-jobs@.
-- The only field we are interested in is @maintainers@, which is why this
-- is just a newtype.
--
-- Note that there are occassionally jobs that don't have a maintainers
-- Note that there are occasionally jobs that don't have a maintainers
-- field, which is why this has to be @Maybe Text@.
newtype Maintainers = Maintainers { maintainers :: Maybe Text }
deriving stock (Generic, Show)
@ -195,13 +200,49 @@ type EmailToGitHubHandles = Map Text Text
-- @@
type MaintainerMap = Map Text (NonEmpty Text)
-- | Generate a mapping of Hydra job names to maintainer GitHub handles.
-- | Information about a package which lists its dependencies and whether the
-- package is marked broken.
data DepInfo = DepInfo {
deps :: Set Text,
broken :: Bool
}
deriving stock (Generic, Show)
deriving anyclass (FromJSON, ToJSON)
-- | Map from package names to their DepInfo. This is the data we get out of a
-- nix call.
type DependencyMap = Map Text DepInfo
-- | Map from package names to their number of reverse dependencies (fst) and
-- unbroken reverse dependencies (snd).
type ReverseDependencyMap = Map Text (Int, Int)
-- | Calculate the (unbroken) reverse dependencies of a package by transitively
-- going through all packages of which it is a dependency.
calculateReverseDependencies :: DependencyMap -> ReverseDependencyMap
calculateReverseDependencies depMap = Map.fromDistinctAscList $ zip keys (zip (rdepMap False) (rdepMap True))
where
-- This code tries to efficiently invert the dependency map and calculate
-- its transitive closure by internally identifying every pkg with its index
-- in the package list and then using memoization.
keys = Map.keys depMap
pkgToIndexMap = Map.fromDistinctAscList (zip keys [0..])
intDeps = zip [0..] $ (\DepInfo{broken,deps} -> (broken,mapMaybe (`Map.lookup` pkgToIndexMap) $ Set.toList deps)) <$> Map.elems depMap
rdepMap onlyUnbroken = IntSet.size <$> resultList
where
resultList = go <$> [0..]
oneStepMap = IntMap.fromListWith IntSet.union $ (\(key,(_,deps)) -> (,IntSet.singleton key) <$> deps) <=< filter (\(_, (broken,_)) -> not (broken && onlyUnbroken)) $ intDeps
go pkg = IntSet.unions (oneStep:((resultList !!) <$> IntSet.toList oneStep))
where oneStep = IntMap.findWithDefault mempty pkg oneStepMap
-- | Generate a mapping of Hydra job names to maintainer GitHub handles. Calls
-- hydra-eval-jobs and the nix script ./maintainer-handles.nix.
getMaintainerMap :: IO MaintainerMap
getMaintainerMap = do
hydraJobs :: HydraJobs <-
readJSONProcess hydraEvalCommand hydraEvalParams "" "Failed to decode hydra-eval-jobs output: "
readJSONProcess hydraEvalCommand hydraEvalParams "Failed to decode hydra-eval-jobs output: "
handlesMap :: EmailToGitHubHandles <-
readJSONProcess handlesCommand handlesParams handlesExpression "Failed to decode nix output for lookup of github handles: "
readJSONProcess nixExprCommand ("maintainers/scripts/haskell/maintainer-handles.nix":nixExprParams) "Failed to decode nix output for lookup of github handles: "
pure $ Map.mapMaybe (splitMaintainersToGitHubHandles handlesMap) hydraJobs
where
-- Split a comma-separated string of Maintainers into a NonEmpty list of
@ -211,6 +252,12 @@ getMaintainerMap = do
splitMaintainersToGitHubHandles handlesMap (Maintainers maint) =
nonEmpty . mapMaybe (`Map.lookup` handlesMap) . Text.splitOn ", " $ fromMaybe "" maint
-- | Get a map of all dependencies of every package by calling the nix
-- script ./dependencies.nix.
getDependencyMap :: IO DependencyMap
getDependencyMap =
readJSONProcess nixExprCommand ("maintainers/scripts/haskell/dependencies.nix":nixExprParams) "Failed to decode nix output for lookup of dependencies: "
-- | Run a process that produces JSON on stdout and decode the JSON to a
-- data type.
--
@ -219,11 +266,10 @@ readJSONProcess
:: FromJSON a
=> FilePath -- ^ Filename of executable.
-> [String] -- ^ Arguments
-> String -- ^ stdin to pass to the process
-> String -- ^ String to prefix to JSON-decode error.
-> IO a
readJSONProcess exe args input err = do
output <- readProcess exe args input
readJSONProcess exe args err = do
output <- readProcess exe args ""
let eitherDecodedOutput = eitherDecodeStrict' . encodeUtf8 . Text.pack $ output
case eitherDecodedOutput of
Left decodeErr -> error $ err <> decodeErr <> "\nRaw: '" <> take 1000 output <> "'"
@ -264,7 +310,13 @@ platformIcon (Platform x) = case x of
data BuildResult = BuildResult {state :: BuildState, id :: Int} deriving (Show, Eq, Ord)
newtype Platform = Platform {platform :: Text} deriving (Show, Eq, Ord)
newtype Table row col a = Table (Map (row, col) a)
type StatusSummary = Map Text (Table Text Platform BuildResult, Set Text)
data SummaryEntry = SummaryEntry {
summaryBuilds :: Table Text Platform BuildResult,
summaryMaintainers :: Set Text,
summaryReverseDeps :: Int,
summaryUnbrokenReverseDeps :: Int
}
type StatusSummary = Map Text SummaryEntry
instance (Ord row, Ord col, Semigroup a) => Semigroup (Table row col a) where
Table l <> Table r = Table (Map.unionWith (<>) l r)
@ -275,11 +327,11 @@ instance Functor (Table row col) where
instance Foldable (Table row col) where
foldMap f (Table a) = foldMap f a
buildSummary :: MaintainerMap -> Seq Build -> StatusSummary
buildSummary maintainerMap = foldl (Map.unionWith unionSummary) Map.empty . fmap toSummary
buildSummary :: MaintainerMap -> ReverseDependencyMap -> Seq Build -> StatusSummary
buildSummary maintainerMap reverseDependencyMap = foldl (Map.unionWith unionSummary) Map.empty . fmap toSummary
where
unionSummary (Table l, l') (Table r, r') = (Table $ Map.union l r, l' <> r')
toSummary Build{finished, buildstatus, job, id, system} = Map.singleton name (Table (Map.singleton (set, Platform system) (BuildResult state id)), maintainers)
unionSummary (SummaryEntry (Table lb) lm lr lu) (SummaryEntry (Table rb) rm rr ru) = SummaryEntry (Table $ Map.union lb rb) (lm <> rm) (max lr rr) (max lu ru)
toSummary Build{finished, buildstatus, job, id, system} = Map.singleton name (SummaryEntry (Table (Map.singleton (set, Platform system) (BuildResult state id))) maintainers reverseDeps unbrokenReverseDeps)
where
state :: BuildState
state = case (finished, buildstatus) of
@ -297,6 +349,7 @@ buildSummary maintainerMap = foldl (Map.unionWith unionSummary) Map.empty . fmap
name = maybe packageName NonEmpty.last splitted
set = maybe "" (Text.intercalate "." . NonEmpty.init) splitted
maintainers = maybe mempty (Set.fromList . toList) (Map.lookup job maintainerMap)
(reverseDeps, unbrokenReverseDeps) = Map.findWithDefault (0,0) name reverseDependencyMap
readBuildReports :: IO (Eval, UTCTime, Seq Build)
readBuildReports = do
@ -339,25 +392,29 @@ makeSearchLink evalId linkLabel query = "[" <> linkLabel <> "](" <> "https://hyd
statusToNumSummary :: StatusSummary -> NumSummary
statusToNumSummary = fmap getSum . foldMap (fmap Sum . jobTotals)
jobTotals :: (Table Text Platform BuildResult, a) -> Table Platform BuildState Int
jobTotals (Table mapping, _) = getSum <$> Table (Map.foldMapWithKey (\(_, platform) (BuildResult buildstate _) -> Map.singleton (platform, buildstate) (Sum 1)) mapping)
jobTotals :: SummaryEntry -> Table Platform BuildState Int
jobTotals (summaryBuilds -> Table mapping) = getSum <$> Table (Map.foldMapWithKey (\(_, platform) (BuildResult buildstate _) -> Map.singleton (platform, buildstate) (Sum 1)) mapping)
details :: Text -> [Text] -> [Text]
details summary content = ["<details><summary>" <> summary <> " </summary>", ""] <> content <> ["</details>", ""]
printBuildSummary :: Eval -> UTCTime -> StatusSummary -> Text
printBuildSummary :: Eval -> UTCTime -> StatusSummary -> [(Text, Int)] -> Text
printBuildSummary
Eval{id, jobsetevalinputs = JobsetEvalInputs{nixpkgs = Nixpkgs{revision}}}
fetchTime
summary =
summary
topBrokenRdeps =
Text.unlines $
headline <> totals
headline <> [""] <> tldr <> ((" * "<>) <$> (errors <> warnings)) <> [""]
<> totals
<> optionalList "#### Maintained packages with build failure" (maintainedList fails)
<> optionalList "#### Maintained packages with failed dependency" (maintainedList failedDeps)
<> optionalList "#### Maintained packages with unknown error" (maintainedList unknownErr)
<> optionalHideableList "#### Unmaintained packages with build failure" (unmaintainedList fails)
<> optionalHideableList "#### Unmaintained packages with failed dependency" (unmaintainedList failedDeps)
<> optionalHideableList "#### Unmaintained packages with unknown error" (unmaintainedList unknownErr)
<> optionalHideableList "#### Top 50 broken packages, sorted by number of reverse dependencies" (brokenLine <$> topBrokenRdeps)
<> ["","*:arrow_heading_up:: The number of packages that depend (directly or indirectly) on this package (if any). If two numbers are shown the first (lower) number considers only packages which currently have enabled hydra jobs, i.e. are not marked broken. The second (higher) number considers all packages.*",""]
<> footer
where
footer = ["*Report generated with [maintainers/scripts/haskell/hydra-report.hs](https://github.com/NixOS/nixpkgs/blob/haskell-updates/maintainers/scripts/haskell/hydra-report.sh)*"]
@ -365,7 +422,7 @@ printBuildSummary
[ "#### Build summary"
, ""
]
<> printTable "Platform" (\x -> makeSearchLink id (platform x <> " " <> platformIcon x) ("." <> platform x)) (\x -> showT x <> " " <> icon x) showT (statusToNumSummary summary)
<> printTable "Platform" (\x -> makeSearchLink id (platform x <> " " <> platformIcon x) ("." <> platform x)) (\x -> showT x <> " " <> icon x) showT numSummary
headline =
[ "### [haskell-updates build report from hydra](https://hydra.nixos.org/jobset/nixpkgs/haskell-updates)"
, "*evaluation ["
@ -380,24 +437,49 @@ printBuildSummary
<> Text.pack (formatTime defaultTimeLocale "%Y-%m-%d %H:%M UTC" fetchTime)
<> "*"
]
jobsByState predicate = Map.filter (predicate . foldl' min Success . fmap state . fst) summary
brokenLine (name, rdeps) = "[" <> name <> "](https://packdeps.haskellers.com/reverse/" <> name <> ") :arrow_heading_up: " <> Text.pack (show rdeps) <> " "
numSummary = statusToNumSummary summary
jobsByState predicate = Map.filter (predicate . worstState) summary
worstState = foldl' min Success . fmap state . summaryBuilds
fails = jobsByState (== Failed)
failedDeps = jobsByState (== DependencyFailed)
unknownErr = jobsByState (\x -> x > DependencyFailed && x < TimedOut)
withMaintainer = Map.mapMaybe (\(x, m) -> (x,) <$> nonEmpty (Set.toList m))
withoutMaintainer = Map.mapMaybe (\(x, m) -> if Set.null m then Just x else Nothing)
withMaintainer = Map.mapMaybe (\e -> (summaryBuilds e,) <$> nonEmpty (Set.toList (summaryMaintainers e)))
withoutMaintainer = Map.mapMaybe (\e -> if Set.null (summaryMaintainers e) then Just e else Nothing)
optionalList heading list = if null list then mempty else [heading] <> list
optionalHideableList heading list = if null list then mempty else [heading] <> details (showT (length list) <> " job(s)") list
maintainedList = showMaintainedBuild <=< Map.toList . withMaintainer
unmaintainedList = showBuild <=< Map.toList . withoutMaintainer
showBuild (name, table) = printJob id name (table, "")
unmaintainedList = showBuild <=< sortOn (\(snd -> x) -> (negate (summaryUnbrokenReverseDeps x), negate (summaryReverseDeps x))) . Map.toList . withoutMaintainer
showBuild (name, entry) = printJob id name (summaryBuilds entry, Text.pack (if summaryReverseDeps entry > 0 then " :arrow_heading_up: " <> show (summaryUnbrokenReverseDeps entry) <>" | "<> show (summaryReverseDeps entry) else ""))
showMaintainedBuild (name, (table, maintainers)) = printJob id name (table, Text.intercalate " " (fmap ("@" <>) (toList maintainers)))
tldr = case (errors, warnings) of
([],[]) -> [":green_circle: **Ready to merge**"]
([],_) -> [":yellow_circle: **Potential issues**"]
_ -> [":red_circle: **Branch not mergeable**"]
warnings =
if' (Unfinished > maybe Success worstState maintainedJob) "`maintained` jobset failed." <>
if' (Unfinished == maybe Success worstState mergeableJob) "`mergeable` jobset is not finished." <>
if' (Unfinished == maybe Success worstState maintainedJob) "`maintained` jobset is not finished."
errors =
if' (isNothing mergeableJob) "No `mergeable` job found." <>
if' (isNothing maintainedJob) "No `maintained` job found." <>
if' (Unfinished > maybe Success worstState mergeableJob) "`mergeable` jobset failed." <>
if' (outstandingJobs (Platform "x86_64-linux") > 100) "Too many outstanding jobs on x86_64-linux." <>
if' (outstandingJobs (Platform "aarch64-linux") > 100) "Too many outstanding jobs on aarch64-linux."
if' p e = if p then [e] else mempty
outstandingJobs platform | Table m <- numSummary = Map.findWithDefault 0 (platform, Unfinished) m
maintainedJob = Map.lookup "maintained" summary
mergeableJob = Map.lookup "mergeable" summary
printMaintainerPing :: IO ()
printMaintainerPing = do
maintainerMap <- getMaintainerMap
(maintainerMap, (reverseDependencyMap, topBrokenRdeps)) <- concurrently getMaintainerMap do
depMap <- getDependencyMap
rdepMap <- evaluate . calculateReverseDependencies $ depMap
let tops = take 50 . sortOn (negate . snd) . fmap (second fst) . filter (\x -> maybe False broken $ Map.lookup (fst x) depMap) . Map.toList $ rdepMap
pure (rdepMap, tops)
(eval, fetchTime, buildReport) <- readBuildReports
putStrLn (Text.unpack (printBuildSummary eval fetchTime (buildSummary maintainerMap buildReport)))
putStrLn (Text.unpack (printBuildSummary eval fetchTime (buildSummary maintainerMap reverseDependencyMap buildReport) topBrokenRdeps))
printMarkBrokenList :: IO ()
printMarkBrokenList = do

@ -0,0 +1,7 @@
# Nix script to lookup maintainer github handles from their email address. Used by ./hydra-report.hs.
let
pkgs = import ../../.. {};
maintainers = import ../../maintainer-list.nix;
inherit (pkgs) lib;
mkMailGithubPair = _: maintainer: if maintainer ? github then { "${maintainer.email}" = maintainer.github; } else {};
in lib.zipAttrsWith (_: builtins.head) (lib.mapAttrsToList mkMailGithubPair maintainers)
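# Evaluated by ./hydra-report.hs via `nix-instantiate --eval --strict --json` (see nixExprParams there).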

@ -0,0 +1,122 @@
#! /usr/bin/env nix-shell
#! nix-shell -i bash -p git gh -I nixpkgs=.
#
# Script to merge the currently open haskell-updates PR into master, bump the
# Stackage version and Hackage versions, and open the next haskell-updates PR.
set -eu -o pipefail
# exit after printing first argument to this function
function die {
# echo the first argument
echo "ERROR: $1"
echo "Aborting!"
exit 1
}
function help {
echo "Usage: $0 HASKELL_UPDATES_PR_NUM"
echo "Merge the currently open haskell-updates PR into master, and open the next one."
echo
echo " -h, --help print this help"
echo " HASKELL_UPDATES_PR_NUM number of the currently open PR on NixOS/nixpkgs"
echo " for the haskell-updates branch"
echo
echo "Example:"
echo " \$ $0 137340"
exit 1
}
# Read in the current haskell-updates PR number from the command line.
while [[ $# -gt 0 ]]; do
key="$1"
case $key in
-h|--help)
help
;;
*)
curr_haskell_updates_pr_num="$1"
shift
;;
esac
done
if [[ -z "${curr_haskell_updates_pr_num-}" ]] ; then
die "You must pass the current haskell-updates PR number as the first argument to this script."
fi
# Make sure you have gh authentication setup.
if ! gh auth status 2>/dev/null ; then
die "You must setup the \`gh\` command. Run \`gh auth login\`."
fi
# Fetch nixpkgs to get an up-to-date origin/haskell-updates branch.
echo "Fetching origin..."
git fetch origin >/dev/null
# Make sure we are currently on a local haskell-updates branch.
curr_branch="$(git rev-parse --abbrev-ref HEAD)"
if [[ "$curr_branch" != "haskell-updates" ]]; then
die "Current branch is not called \"haskell-updates\"."
fi
# Make sure our local haskell-updates branch is on the same commit as
# origin/haskell-updates.
curr_branch_commit="$(git rev-parse haskell-updates)"
origin_haskell_updates_commit="$(git rev-parse origin/haskell-updates)"
if [[ "$curr_branch_commit" != "$origin_haskell_updates_commit" ]]; then
die "Current branch is not at the same commit as origin/haskell-updates"
fi
# Merge the current open haskell-updates PR.
echo "Merging https://github.com/NixOS/nixpkgs/pull/${curr_haskell_updates_pr_num}..."
gh pr merge --repo NixOS/nixpkgs --merge "$curr_haskell_updates_pr_num"
# Update the list of Haskell package versions in NixOS on Hackage.
echo "Updating list of Haskell package versions in NixOS on Hackage..."
./maintainers/scripts/haskell/upload-nixos-package-list-to-hackage.sh
# Update stackage, Hackage hashes, and regenerate Haskell package set
echo "Updating Stackage..."
./maintainers/scripts/haskell/update-stackage.sh --do-commit
echo "Updating Hackage hashes..."
./maintainers/scripts/haskell/update-hackage.sh --do-commit
echo "Regenerating Hackage packages..."
./maintainers/scripts/haskell/regenerate-hackage-packages.sh --do-commit
# Push these new commits to the haskell-updates branch
echo "Pushing commits just created to the remote haskell-updates branch..."
git push
# Open new PR
new_pr_body=$(cat <<EOF
### This Merge
This PR is the regular merge of the \`haskell-updates\` branch into \`master\`.
This branch is being continually built and tested by hydra at https://hydra.nixos.org/jobset/nixpkgs/haskell-updates. You may be able to find an up-to-date Hydra build report at [cdepillabout/nix-haskell-updates-status](https://github.com/cdepillabout/nix-haskell-updates-status).
We roughly aim to merge these \`haskell-updates\` PRs at least once every two weeks. See the @NixOS/haskell [team calendar](https://cloud.maralorn.de/apps/calendar/p/Mw5WLnzsP7fC4Zky) for who is currently in charge of this branch.
### haskellPackages Workflow Summary
Our workflow is currently described in [\`pkgs/development/haskell-modules/HACKING.md\`](https://github.com/NixOS/nixpkgs/blob/haskell-updates/pkgs/development/haskell-modules/HACKING.md).
The short version is this:
* We regularly update the Stackage and Hackage pins on \`haskell-updates\` (normally at the beginning of a merge window).
* The community fixes builds of Haskell packages on that branch.
* We aim at at least one merge of \`haskell-updates\` into \`master\` every two weeks.
* We only do the merge if the [\`mergeable\`](https://hydra.nixos.org/job/nixpkgs/haskell-updates/mergeable) job is succeeding on hydra.
* If a [\`maintained\`](https://hydra.nixos.org/job/nixpkgs/haskell-updates/maintained) package is still broken at the time of merge, we will only merge if the maintainer has been pinged 7 days in advance. (If you care about a Haskell package, become a maintainer!)
---
This is the follow-up to #${curr_haskell_updates_pr_num}. Come to [#haskell:nixos.org](https://matrix.to/#/#haskell:nixos.org) if you have any questions.
EOF
)
echo "Opening a PR for the next haskell-updates merge cycle..."
gh pr create --repo NixOS/nixpkgs --base master --head haskell-updates --title "haskellPackages: update stackage and hackage" --body "$new_pr_body"

@ -19,3 +19,4 @@ package_list="$(nix-build -A haskell.package-list)/nixos-hackage-packages.csv"
username=$(grep "^username:" ~/.cabal/config | sed "s/^username: //")
password_command=$(grep "^password-command:" ~/.cabal/config | sed "s/^password-command: //")
curl -u "$username:$($password_command | head -n1)" --digest -H "Content-type: text/csv" -T "$package_list" http://hackage.haskell.org/distro/NixOS/packages.csv
echo

@ -1,89 +1,86 @@
name,server,version,luaversion,maintainers
alt-getopt,,,,arobyn
ansicolors,,,,
bit32,,5.3.0-1,lua5_1,lblasc
argparse,,,,
basexx,,,,
binaryheap,,,,vcunat
busted,,,,
cassowary,,,,marsam alerque
compat53,,0.7-1,,vcunat
cosmo,,,,marsam
coxpcall,,1.17.0-1,,
cqueues,,,,vcunat
cyrussasl,,,,
digestif,,0.2-1,lua5_3,
dkjson,,,,
fifo,,,,
gitsigns.nvim,,,lua5_1,
http,,0.3-0,,vcunat
inspect,,,,
ldbus,http://luarocks.org/dev,,,
ldoc,,,,
lgi,,,,
linenoise,,,,
ljsyscall,,,lua5_1,lblasc
lpeg,,,,vyp
lpeg_patterns,,,,
lpeglabel,,,,
lpty,,,,
lrexlib-gnu,,,,
lrexlib-pcre,,,,vyp
lrexlib-posix,,,,
ltermbox,,,,
lua-cjson,,,,
lua-cmsgpack,,,,
lua-iconv,,,,
lua-lsp,http://luarocks.org/dev,,,
lua-messagepack,,,,
lua-resty-http,,,,
lua-resty-jwt,,,,
lua-resty-openidc,,,,
lua-resty-openssl,,,,
lua-resty-session,,,,
lua-term,,,,
lua-toml,,,,
lua-zlib,,,,koral
lua_cliargs,,,,
luabitop,,,,
luacheck,,,,
luacov,,,,
luadbi,,,,
luadbi-mysql,,,,
luadbi-postgresql,,,,
luadbi-sqlite3,,,,
luadoc,,,,
luaepnf,,,,
luaevent,,,,
luaexpat,,1.3.0-1,,arobyn flosse
luaffi,http://luarocks.org/dev,,,
luafilesystem,,1.7.0-2,,flosse
lualogging,,,,
luaossl,,,lua5_1,
luaposix,,34.1.1-1,,vyp lblasc
luarepl,,,,
luasec,,,,flosse
luasocket,,,,
luasql-sqlite3,,,,vyp
luassert,,,,
luasystem,,,,
luautf8,,,,pstn
luazip,,,,
lua-yajl,,,,pstn
luuid,,,,
luv,,1.30.0-0,,
lyaml,,,,lblasc
markdown,,,,
mediator_lua,,,,
mpack,,,,
moonscript,,,,arobyn
nvim-client,,,,
penlight,,,,
plenary.nvim,,,lua5_1,
rapidjson,,,,
readline,,,,
say,,,,
std._debug,,,,
std.normalize,,,,
stdlib,,,,vyp
vstruct,,,,
name,src,ref,server,version,luaversion,maintainers
alt-getopt,,,,,,arobyn
bit32,,,,5.3.0-1,lua5_1,lblasc
argparse,https://github.com/luarocks/argparse.git,,,,,
basexx,https://github.com/teto/basexx.git,,,,,
binaryheap,https://github.com/Tieske/binaryheap.lua,,,,,vcunat
busted,,,,,,
cassowary,,,,,,marsam alerque
compat53,,,,0.7-1,,vcunat
cosmo,,,,,,marsam
coxpcall,,,,1.17.0-1,,
cqueues,,,,,,vcunat
cyrussasl,https://github.com/JorjBauer/lua-cyrussasl.git,,,,,
digestif,https://github.com/astoff/digestif.git,,,0.2-1,lua5_3,
dkjson,,,,,,
fifo,,,,,,
gitsigns.nvim,https://github.com/lewis6991/gitsigns.nvim.git,,,,lua5_1,
http,,,,0.3-0,,vcunat
inspect,,,,,,
ldbus,,,http://luarocks.org/dev,,,
ldoc,https://github.com/stevedonovan/LDoc.git,,,,,
lgi,,,,,,
linenoise,https://github.com/hoelzro/lua-linenoise.git,,,,,
ljsyscall,,,,,lua5_1,lblasc
lpeg,,,,,,vyp
lpeg_patterns,,,,,,
lpeglabel,,,,,,
lpty,,,,,,
lrexlib-gnu,,,,,,
lrexlib-pcre,,,,,,vyp
lrexlib-posix,,,,,,
lua-cjson,,,,,,
lua-cmsgpack,,,,,,
lua-iconv,,,,,,
lua-lsp,,,,,,
lua-messagepack,,,,,,
lua-resty-http,,,,,,
lua-resty-jwt,,,,,,
lua-resty-openidc,,,,,,
lua-resty-openssl,,,,,,
lua-resty-session,,,,,,
lua-term,,,,,,
lua-toml,,,,,,
lua-zlib,,,,,,koral
lua_cliargs,https://github.com/amireh/lua_cliargs.git,,,,,
luabitop,https://github.com/teto/luabitop.git,,,,,
luacheck,,,,,,
luacov,,,,,,
luadbi,,,,,,
luadbi-mysql,,,,,,
luadbi-postgresql,,,,,,
luadbi-sqlite3,,,,,,
luaepnf,,,,,,
luaevent,,,,,,
luaexpat,,,,1.3.0-1,,arobyn flosse
luaffi,,,http://luarocks.org/dev,,,
luafilesystem,,,,1.7.0-2,,flosse
lualogging,,,,,,
luaossl,,,,,lua5_1,
luaposix,,,,34.1.1-1,,vyp lblasc
luarepl,,,,,,
luasec,,,,,,flosse
luasocket,,,,,,
luasql-sqlite3,,,,,,vyp
luassert,,,,,,
luasystem,,,,,,
luautf8,,,,,,pstn
luazip,,,,,,
lua-yajl,,,,,,pstn
luuid,,,,,,
luv,,,,1.42.0-0,,
lyaml,,,,,,lblasc
markdown,,,,,,
mediator_lua,,,,,,
mpack,,,,,,
moonscript,,,,,,arobyn
nvim-client,https://github.com/neovim/lua-client.git,,,,,
penlight,https://github.com/lunarmodules/Penlight.git,,,,,alerque
plenary.nvim,https://github.com/nvim-lua/plenary.nvim.git,,,,lua5_1,
rapidjson,https://github.com/xpol/lua-rapidjson.git,,,,,
readline,,,,,,
say,https://github.com/Olivine-Labs/say.git,,,,,
std._debug,https://github.com/lua-stdlib/_debug.git,,,,,
std.normalize,git://github.com/lua-stdlib/normalize.git,,,,,
stdlib,,,,41.2.2,,vyp
vstruct,https://github.com/ToxicFrog/vstruct.git,,,,,


@ -547,7 +547,6 @@ def update_plugins(editor: Editor, args):
log.setLevel(LOG_LEVELS[args.debug])
log.info("Start updating plugins")
nixpkgs_repo = git.Repo(editor.root, search_parent_directories=True)
update = editor.get_update(args.input_file, args.outfile, args.proc)
redirects = update()
@ -556,6 +555,7 @@ def update_plugins(editor: Editor, args):
autocommit = not args.no_commit
if autocommit:
nixpkgs_repo = git.Repo(editor.root, search_parent_directories=True)
commit(nixpkgs_repo, f"{editor.attr_path}: update", [args.outfile])
if redirects:

@ -4,123 +4,130 @@ set -e
# --print: avoid dependency on environment
optPrint=
if [ "$1" == "--print" ]; then
optPrint=true
shift
optPrint=true
shift
fi
if [ "$#" != 1 ] && [ "$#" != 2 ]; then
cat <<-EOF
Usage: $0 [--print] commit-spec [commit-spec]
You need to be in a git-controlled nixpkgs tree.
The current state of the tree will be used if the second commit is missing.
EOF
exit 1
cat <<EOF
Usage: $0 [--print] from-commit-spec [to-commit-spec]
You need to be in a git-controlled nixpkgs tree.
The current state of the tree will be used if the second commit is missing.
Examples:
effect of latest commit:
$ $0 HEAD^
$ $0 --print HEAD^
effect of the whole patch series for 'staging' branch:
$ $0 origin/staging staging
EOF
exit 1
fi
# A slightly hacky way to get the config.
parallel="$(echo 'config.rebuild-amount.parallel or false' | nix-repl . 2>/dev/null \
| grep -v '^\(nix-repl.*\)\?$' | tail -n 1 || true)"
| grep -v '^\(nix-repl.*\)\?$' | tail -n 1 || true)"
echo "Estimating rebuild amount by counting changed Hydra jobs."
echo "Estimating rebuild amount by counting changed Hydra jobs (parallel=${parallel:-unset})."
toRemove=()
cleanup() {
rm -rf "${toRemove[@]}"
rm -rf "${toRemove[@]}"
}
trap cleanup EXIT SIGINT SIGQUIT ERR
MKTEMP='mktemp --tmpdir nix-rebuild-amount-XXXXXXXX'
nixexpr() {
cat <<-EONIX
let
lib = import $1/lib;
hydraJobs = import $1/pkgs/top-level/release.nix
# Compromise: accuracy vs. resources needed for evaluation.
{ supportedSystems = cfg.systems or [ "x86_64-linux" "x86_64-darwin" ]; };
cfg = (import $1 {}).config.rebuild-amount or {};
recurseIntoAttrs = attrs: attrs // { recurseForDerivations = true; };
# hydraJobs leaves recurseForDerivations as empty attrmaps;
# that would break nix-env and we also need to recurse everywhere.
tweak = lib.mapAttrs
(name: val:
if name == "recurseForDerivations" then true
else if lib.isAttrs val && val.type or null != "derivation"
then recurseIntoAttrs (tweak val)
else val
);
# Some of these contain explicit references to platform(s) we want to avoid;
# some even (transitively) depend on ~/.nixpkgs/config.nix (!)
blacklist = [
"tarball" "metrics" "manual"
"darwin-tested" "unstable" "stdenvBootstrapTools"
"moduleSystem" "lib-tests" # these just confuse the output
];
in
tweak (builtins.removeAttrs hydraJobs blacklist)
EONIX
cat <<EONIX
let
lib = import $1/lib;
hydraJobs = import $1/pkgs/top-level/release.nix
# Compromise: accuracy vs. resources needed for evaluation.
{ supportedSystems = cfg.systems or [ "x86_64-linux" "x86_64-darwin" ]; };
cfg = (import $1 {}).config.rebuild-amount or {};
recurseIntoAttrs = attrs: attrs // { recurseForDerivations = true; };
# hydraJobs leaves recurseForDerivations as empty attrmaps;
# that would break nix-env and we also need to recurse everywhere.
tweak = lib.mapAttrs
(name: val:
if name == "recurseForDerivations" then true
else if lib.isAttrs val && val.type or null != "derivation"
then recurseIntoAttrs (tweak val)
else val
);
# Some of these contain explicit references to platform(s) we want to avoid;
# some even (transitively) depend on ~/.nixpkgs/config.nix (!)
blacklist = [
"tarball" "metrics" "manual"
"darwin-tested" "unstable" "stdenvBootstrapTools"
"moduleSystem" "lib-tests" # these just confuse the output
];
in
tweak (builtins.removeAttrs hydraJobs blacklist)
EONIX
}
# Output packages in tree $2 that weren't in $1.
# Changing the output hash or name is taken as a change.
# Extra nix-env parameters can be in $3
newPkgs() {
# We use files instead of pipes, as running multiple nix-env processes
# could eat too much memory for a standard 4GiB machine.
local -a list
for i in 1 2; do
local l="$($MKTEMP)"
list[$i]="$l"
toRemove+=("$l")
local expr="$($MKTEMP)"
toRemove+=("$expr")
nixexpr "${!i}" > "$expr"
nix-env -f "$expr" -qaP --no-name --out-path --show-trace $3 \
| sort > "${list[$i]}" &
if [ "$parallel" != "true" ]; then
wait
fi
done
wait
comm -13 "${list[@]}"
# We use files instead of pipes, as running multiple nix-env processes
# could eat too much memory for a standard 4GiB machine.
local -a list
for i in 1 2; do
local l="$($MKTEMP)"
list[$i]="$l"
toRemove+=("$l")
local expr="$($MKTEMP)"
toRemove+=("$expr")
nixexpr "${!i}" > "$expr"
nix-env -f "$expr" -qaP --no-name --out-path --show-trace $3 \
| sort > "${list[$i]}" &
if [ "$parallel" != "true" ]; then
wait
fi
done
wait
comm -13 "${list[@]}"
}
# Prepare nixpkgs trees.
declare -a tree
for i in 1 2; do
if [ -n "${!i}" ]; then # use the given commit
dir="$($MKTEMP -d)"
tree[$i]="$dir"
toRemove+=("$dir")
git clone --shared --no-checkout --quiet . "${tree[$i]}"
(cd "${tree[$i]}" && git checkout --quiet "${!i}")
else #use the current tree
tree[$i]="$(pwd)"
fi
if [ -n "${!i}" ]; then # use the given commit
dir="$($MKTEMP -d)"
tree[$i]="$dir"
toRemove+=("$dir")
git clone --shared --no-checkout --quiet . "${tree[$i]}"
(cd "${tree[$i]}" && git checkout --quiet "${!i}")
else #use the current tree
tree[$i]="$(pwd)"
fi
done
newlist="$($MKTEMP)"
toRemove+=("$newlist")
# Notes:
# - the evaluation is done on x86_64-linux, like on Hydra.
# - using $newlist file so that newPkgs() isn't in a sub-shell (because of toRemove)
# - the evaluation is done on x86_64-linux, like on Hydra.
# - using $newlist file so that newPkgs() isn't in a sub-shell (because of toRemove)
newPkgs "${tree[1]}" "${tree[2]}" '--argstr system "x86_64-linux"' > "$newlist"
# Hacky: keep only the last word of each attribute path and sort.
sed -n 's/\([^. ]*\.\)*\([^. ]*\) .*$/\2/p' < "$newlist" \
| sort | uniq -c
| sort | uniq -c
if [ -n "$optPrint" ]; then
echo
cat "$newlist"
echo
cat "$newlist"
fi

@ -1,5 +1,5 @@
#!/usr/bin/env nix-shell
#!nix-shell -p nix-prefetch-git luarocks-nix python3 python3Packages.GitPython nix -i python3
#!nix-shell update-luarocks-shell.nix -i python3
# format:
# $ nix run nixpkgs.python3Packages.black -c black update.py
@ -19,7 +19,7 @@ import logging
import textwrap
from multiprocessing.dummy import Pool
from typing import List, Tuple
from typing import List, Tuple, Optional
from pathlib import Path
log = logging.getLogger()
@ -33,8 +33,7 @@ TMP_FILE="$(mktemp)"
GENERATED_NIXFILE="pkgs/development/lua-modules/generated-packages.nix"
LUAROCKS_CONFIG="$NIXPKGS_PATH/maintainers/scripts/luarocks-config.lua"
HEADER = """
/* {GENERATED_NIXFILE} is an auto-generated file -- DO NOT EDIT!
HEADER = """/* {GENERATED_NIXFILE} is an auto-generated file -- DO NOT EDIT!
Regenerate it with:
nixpkgs$ ./maintainers/scripts/update-luarocks-packages
@ -50,10 +49,21 @@ FOOTER="""
@dataclass
class LuaPlugin:
name: str
version: str
server: str
luaversion: str
maintainers: str
'''Name of the plugin, as seen on luarocks.org'''
src: str
'''address to the git repository'''
ref: Optional[str]
'''git reference (branch name/tag)'''
version: Optional[str]
'''Set it to pin a package '''
server: Optional[str]
'''luarocks.org registers packages under different manifests.
Its value can be 'http://luarocks.org/dev'
'''
luaversion: Optional[str]
'''Attribute of the lua interpreter if a package is available only for a specific lua version'''
maintainers: Optional[str]
''' Optional string listing maintainers separated by spaces'''
@property
def normalized_name(self) -> str:
@ -88,9 +98,8 @@ class LuaEditor(Editor):
header2 = textwrap.dedent(
# header2 = inspect.cleandoc(
"""
{ self, stdenv, lib, fetchurl, fetchgit, ... } @ args:
self: super:
with self;
{ self, stdenv, lib, fetchurl, fetchgit, callPackage, ... } @ args:
final: prev:
{
""")
f.write(header2)
@ -149,16 +158,33 @@ def generate_pkg_nix(plug: LuaPlugin):
Our cache key associates "p.name-p.version" to its rockspec
'''
log.debug("Generating nix expression for %s", plug.name)
cmd = [ "luarocks", "nix", plug.name]
cmd = [ "luarocks", "nix"]
if plug.server:
cmd.append(f"--only-server={plug.server}")
if plug.maintainers:
cmd.append(f"--maintainers={plug.maintainers}")
if plug.version:
cmd.append(plug.version)
# updates plugin directly from its repository
print("server: [%s]" % plug.server)
# if plug.server == "src":
if plug.src != "":
if plug.src is None:
msg = "src must be set when 'version' is set to \"src\" for package %s" % plug.name
log.error(msg)
raise RuntimeError(msg)
log.debug("Updating from source %s", plug.src)
cmd.append(plug.src)
# update the plugin from luarocks
else:
cmd.append(plug.name)
if plug.version and plug.version != "src":
cmd.append(plug.version)
#
if plug.server != "src" and plug.server:
cmd.append(f"--only-server={plug.server}")
if plug.luaversion:
with CleanEnvironment():
@ -169,8 +195,9 @@ def generate_pkg_nix(plug: LuaPlugin):
lua_drv_path=subprocess.check_output(cmd2, text=True).strip()
cmd.append(f"--lua-dir={lua_drv_path}/bin")
log.debug("running %s", cmd)
log.debug("running %s", ' '.join(cmd))
output = subprocess.check_output(cmd, text=True)
output = "callPackage(" + output.strip() + ") {};\n\n"
return (plug, output)
def main():
@ -191,3 +218,4 @@ if __name__ == "__main__":
main()
# vim: set ft=python noet fdm=manual fenc=utf-8 ff=unix sts=0 sw=4 ts=4 :

@ -1,12 +1,13 @@
{ nixpkgs ? import ../.. { }
}:
with nixpkgs;
let
pyEnv = python3.withPackages(ps: [ ps.GitPython ]);
in
mkShell {
packages = [
bash
pyEnv
luarocks-nix
nix-prefetch-scripts
parallel
];
LUAROCKS_NIXPKGS_PATH = toString nixpkgs.path;
}

@ -103,7 +103,7 @@ let
pathContent = lib.attrByPath prefix null pkgs;
in
if pathContent == null then
builtins.throw "Attribute path `${path}` does not exists."
builtins.throw "Attribute path `${path}` does not exist."
else
packagesWithPath prefix (path: pkg: builtins.hasAttr "updateScript" pkg)
pathContent;
@ -115,7 +115,7 @@ let
package = lib.attrByPath (lib.splitString "." path) null pkgs;
in
if package == null then
builtins.throw "Package with an attribute name `${path}` does not exists."
builtins.throw "Package with an attribute name `${path}` does not exist."
else if ! builtins.hasAttr "updateScript" package then
builtins.throw "Package with an attribute name `${path}` does not have a `passthru.updateScript` attribute defined."
else

@ -137,7 +137,7 @@ with lib.maintainers; {
cleverca22
disassembler
jonringer
maveru
manveru
nrdxp
];
scope = "Input-Output Global employees, which maintain critical software";
@ -164,6 +164,24 @@ with lib.maintainers; {
scope = "Maintain Kodi and related packages.";
};
linux-kernel = {
members = [
TredwellGit
ma27
nequissimus
qyliss
];
scope = "Maintain the Linux kernel.";
};
mate = {
members = [
j03
romildo
];
scope = "Maintain Mate desktop environment and related packages.";
};
matrix = {
members = [
ma27
@ -178,6 +196,15 @@ with lib.maintainers; {
scope = "Maintain the ecosystem around Matrix, a decentralized messenger.";
};
openstack = {
members = [
angustrau
superherointj
SuperSandro2000
];
scope = "Maintain the ecosystem around OpenStack";
};
pantheon = {
members = [
davidak

@ -16,7 +16,7 @@ If NixOS fails to boot, there are a number of kernel command line parameters tha
`boot.debug1mounts`
: Like `boot.debug1` or `boot.debug1devices`, but runs stage1 until all filesystems that are mounted during initrd are mounted (see [neededForBoot](#opt-fileSystems._name_.neededForBoot)). As a motivating example, this could be useful if you've forgotten to set [neededForBoot](options.html#opt-fileSystems._name_.neededForBoot) on a file system.
: Like `boot.debug1` or `boot.debug1devices`, but runs stage1 until all filesystems that are mounted during initrd are mounted (see [neededForBoot](#opt-fileSystems._name_.neededForBoot)). As a motivating example, this could be useful if you've forgotten to set [neededForBoot](#opt-fileSystems._name_.neededForBoot) on a file system.
`boot.trace`

@ -0,0 +1,62 @@
# Cleaning the Nix Store {#sec-nix-gc}
Nix has a purely functional model, meaning that packages are never
upgraded in place. Instead new versions of packages end up in a
different location in the Nix store (`/nix/store`). You should
periodically run Nix's *garbage collector* to remove old, unreferenced
packages. This is easy:
```ShellSession
$ nix-collect-garbage
```
Alternatively, you can use a systemd unit that does the same in the
background:
```ShellSession
# systemctl start nix-gc.service
```
You can tell NixOS in `configuration.nix` to run this unit automatically
at certain points in time, for instance, every night at 03:15:
```nix
nix.gc.automatic = true;
nix.gc.dates = "03:15";
```
The commands above do not remove garbage collector roots, such as old
system configurations. Thus they do not remove the ability to roll back
to previous configurations. The following command deletes old roots,
removing the ability to roll back to them:
```ShellSession
$ nix-collect-garbage -d
```
You can also do this for specific profiles, e.g.
```ShellSession
$ nix-env -p /nix/var/nix/profiles/per-user/eelco/profile --delete-generations old
```
Note that NixOS system configurations are stored in the profile
`/nix/var/nix/profiles/system`.
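If you use the automatic garbage collection unit shown above, you can
delete old roots on the same schedule by passing the flag through the
`nix.gc.options` option (a sketch; choose a retention window matching
how far back you want to be able to roll back):
```nix
nix.gc.automatic = true;
nix.gc.dates = "03:15";
# extra flags for nix-collect-garbage; here: also delete roots older than 30 days
nix.gc.options = "--delete-older-than 30d";
```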
Another way to reclaim disk space (often as much as 40% of the size of
the Nix store) is to run Nix's store optimiser, which seeks out
identical files in the store and replaces them with hard links to a
single copy.
```ShellSession
$ nix-store --optimise
```
Since this command needs to read the entire Nix store, it can take quite
a while to finish.
## NixOS Boot Entries {#sect-nixos-gc-boot-entries}
If your `/boot` partition runs out of space, after clearing old profiles
you must rebuild your system with `nixos-rebuild` to update the `/boot`
partition and clear space.

@ -1,63 +0,0 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
version="5.0"
xml:id="sec-nix-gc">
<title>Cleaning the Nix Store</title>
<para>
Nix has a purely functional model, meaning that packages are never upgraded
in place. Instead new versions of packages end up in a different location in
the Nix store (<filename>/nix/store</filename>). You should periodically run
Nix’s <emphasis>garbage collector</emphasis> to remove old, unreferenced
packages. This is easy:
<screen>
<prompt>$ </prompt>nix-collect-garbage
</screen>
Alternatively, you can use a systemd unit that does the same in the
background:
<screen>
<prompt># </prompt>systemctl start nix-gc.service
</screen>
You can tell NixOS in <filename>configuration.nix</filename> to run this unit
automatically at certain points in time, for instance, every night at 03:15:
<programlisting>
<xref linkend="opt-nix.gc.automatic"/> = true;
<xref linkend="opt-nix.gc.dates"/> = "03:15";
</programlisting>
</para>
<para>
The commands above do not remove garbage collector roots, such as old system
configurations. Thus they do not remove the ability to roll back to previous
configurations. The following command deletes old roots, removing the ability
to roll back to them:
<screen>
<prompt>$ </prompt>nix-collect-garbage -d
</screen>
You can also do this for specific profiles, e.g.
<screen>
<prompt>$ </prompt>nix-env -p /nix/var/nix/profiles/per-user/eelco/profile --delete-generations old
</screen>
Note that NixOS system configurations are stored in the profile
<filename>/nix/var/nix/profiles/system</filename>.
</para>
<para>
Another way to reclaim disk space (often as much as 40% of the size of the
Nix store) is to run Nix’s store optimiser, which seeks out identical files
in the store and replaces them with hard links to a single copy.
<screen>
<prompt>$ </prompt>nix-store --optimise
</screen>
Since this command needs to read the entire Nix store, it can take quite a
while to finish.
</para>
<section xml:id="sect-nixos-gc-boot-entries">
<title>NixOS Boot Entries</title>
<para>
If your <filename>/boot</filename> partition runs out of space, after
clearing old profiles you must rebuild your system with
<literal>nixos-rebuild</literal> to update the <filename>/boot</filename>
partition and clear space.
</para>
</section>
</chapter>

@ -0,0 +1,44 @@
# Container Networking {#sec-container-networking}
When you create a container using `nixos-container create`, it gets its
own private IPv4 address in the range `10.233.0.0/16`. You can get the
container's IPv4 address as follows:
```ShellSession
# nixos-container show-ip foo
10.233.4.2
$ ping -c1 10.233.4.2
64 bytes from 10.233.4.2: icmp_seq=1 ttl=64 time=0.106 ms
```
Networking is implemented using a pair of virtual Ethernet devices. The
network interface in the container is called `eth0`, while the matching
interface in the host is called `ve-container-name` (e.g., `ve-foo`).
The container has its own network namespace and the `CAP_NET_ADMIN`
capability, so it can perform arbitrary network configuration such as
setting up firewall rules, without affecting or having access to the
host's network.
By default, containers cannot talk to the outside network. If you want
that, you should set up Network Address Translation (NAT) rules on the
host to rewrite container traffic to use your external IP address. This
can be accomplished using the following configuration on the host:
```nix
networking.nat.enable = true;
networking.nat.internalInterfaces = ["ve-+"];
networking.nat.externalInterface = "eth0";
```
where `eth0` should be replaced with the desired external interface.
Note that `ve-+` is a wildcard that matches all container interfaces.
If you are using Network Manager, you need to explicitly prevent it from
managing container interfaces:
```nix
networking.networkmanager.unmanaged = [ "interface-name:ve-*" ];
```
You may need to restart your system for the changes to take effect.

@ -1,59 +0,0 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
version="5.0"
xml:id="sec-container-networking">
<title>Container Networking</title>
<para>
When you create a container using <literal>nixos-container create</literal>,
it gets it own private IPv4 address in the range
<literal>10.233.0.0/16</literal>. You can get the container’s IPv4 address
as follows:
<screen>
<prompt># </prompt>nixos-container show-ip foo
10.233.4.2
<prompt>$ </prompt>ping -c1 10.233.4.2
64 bytes from 10.233.4.2: icmp_seq=1 ttl=64 time=0.106 ms
</screen>
</para>
<para>
Networking is implemented using a pair of virtual Ethernet devices. The
network interface in the container is called <literal>eth0</literal>, while
the matching interface in the host is called
<literal>ve-<replaceable>container-name</replaceable></literal> (e.g.,
<literal>ve-foo</literal>). The container has its own network namespace and
the <literal>CAP_NET_ADMIN</literal> capability, so it can perform arbitrary
network configuration such as setting up firewall rules, without affecting or
having access to the host’s network.
</para>
<para>
By default, containers cannot talk to the outside network. If you want that,
you should set up Network Address Translation (NAT) rules on the host to
rewrite container traffic to use your external IP address. This can be
accomplished using the following configuration on the host:
<programlisting>
<xref linkend="opt-networking.nat.enable"/> = true;
<xref linkend="opt-networking.nat.internalInterfaces"/> = ["ve-+"];
<xref linkend="opt-networking.nat.externalInterface"/> = "eth0";
</programlisting>
where <literal>eth0</literal> should be replaced with the desired external
interface. Note that <literal>ve-+</literal> is a wildcard that matches all
container interfaces.
</para>
<para>
If you are using Network Manager, you need to explicitly prevent it from
managing container interfaces:
<programlisting>
networking.networkmanager.unmanaged = [ "interface-name:ve-*" ];
</programlisting>
</para>
<para>
You may need to restart your system for the changes to take effect.
</para>
</section>

@ -0,0 +1,28 @@
# Container Management {#ch-containers}
NixOS allows you to easily run other NixOS instances as *containers*.
Containers are a light-weight approach to virtualisation that runs
software in the container at the same speed as in the host system. NixOS
containers share the Nix store of the host, making container creation
very efficient.
::: {.warning}
Currently, NixOS containers are not perfectly isolated from the host
system. This means that a user with root access to the container can do
things that affect the host. So you should not give container root
access to untrusted users.
:::
NixOS containers can be created in two ways: imperatively, using the
command `nixos-container`, and declaratively, by specifying them in your
`configuration.nix`. The declarative approach implies that containers
get upgraded along with your host system when you run `nixos-rebuild`,
which is often not what you want. By contrast, in the imperative
approach, containers are configured and updated independently from the
host system.
```{=docbook}
<xi:include href="imperative-containers.section.xml" />
<xi:include href="declarative-containers.section.xml" />
<xi:include href="container-networking.section.xml" />
```

@ -1,34 +0,0 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
version="5.0"
xml:id="ch-containers">
<title>Container Management</title>
<para>
NixOS allows you to easily run other NixOS instances as
<emphasis>containers</emphasis>. Containers are a light-weight approach to
virtualisation that runs software in the container at the same speed as in
the host system. NixOS containers share the Nix store of the host, making
container creation very efficient.
</para>
<warning>
<para>
Currently, NixOS containers are not perfectly isolated from the host system.
This means that a user with root access to the container can do things that
affect the host. So you should not give container root access to untrusted
users.
</para>
</warning>
<para>
NixOS containers can be created in two ways: imperatively, using the command
<command>nixos-container</command>, and declaratively, by specifying them in
your <filename>configuration.nix</filename>. The declarative approach implies
that containers get upgraded along with your host system when you run
<command>nixos-rebuild</command>, which is often not what you want. By
contrast, in the imperative approach, containers are configured and updated
independently from the host system.
</para>
<xi:include href="imperative-containers.xml" />
<xi:include href="declarative-containers.xml" />
<xi:include href="container-networking.xml" />
</chapter>

@ -0,0 +1,59 @@
# Control Groups {#sec-cgroups}
To keep track of the processes in a running system, systemd uses
*control groups* (cgroups). A control group is a set of processes used
to allocate resources such as CPU, memory or I/O bandwidth. There can be
multiple control group hierarchies, allowing each kind of resource to be
managed independently.
The command `systemd-cgls` lists all control groups in the `systemd`
hierarchy, which is what systemd uses to keep track of the processes
belonging to each service or user session:
```ShellSession
$ systemd-cgls
├─user
│ └─eelco
│ └─c1
│ ├─ 2567 -:0
│ ├─ 2682 kdeinit4: kdeinit4 Running...
│ ├─ ...
│ └─10851 sh -c less -R
└─system
├─httpd.service
│ ├─2444 httpd -f /nix/store/3pyacby5cpr55a03qwbnndizpciwq161-httpd.conf -DNO_DETACH
│ └─...
├─dhcpcd.service
│ └─2376 dhcpcd --config /nix/store/f8dif8dsi2yaa70n03xir8r653776ka6-dhcpcd.conf
└─ ...
```
Similarly, `systemd-cgls cpu` shows the cgroups in the CPU hierarchy,
which allows per-cgroup CPU scheduling priorities. By default, every
systemd service gets its own CPU cgroup, while all user sessions are in
the top-level CPU cgroup. This ensures, for instance, that a thousand
run-away processes in the `httpd.service` cgroup cannot starve the CPU
for one process in the `postgresql.service` cgroup. (By contrast, if
they were in the same cgroup, then the PostgreSQL process would get
1/1001 of the cgroup's CPU time.) You can limit a service's CPU share in
`configuration.nix`:
```nix
systemd.services.httpd.serviceConfig.CPUShares = 512;
```
By default, every cgroup has 1024 CPU shares, so this will halve the CPU
allocation of the `httpd.service` cgroup.
There is also a `memory` hierarchy that controls memory allocation
limits; by default, all processes are in the top-level cgroup, so any
service or session can exhaust all available memory. Per-cgroup memory
limits can be specified in `configuration.nix`; for instance, to limit
`httpd.service` to 512 MiB of RAM (excluding swap):
```nix
systemd.services.httpd.serviceConfig.MemoryLimit = "512M";
```
The command `systemd-cgtop` shows a continuously updated list of all
cgroups with their CPU and memory usage.
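Both limits can be set on a single unit, of course; a sketch for
`httpd.service` that simply combines the two values used above:
```nix
systemd.services.httpd.serviceConfig = {
  CPUShares = 512;       # half of the default 1024 shares
  MemoryLimit = "512M";  # RAM cap, excluding swap
};
```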

@ -1,65 +0,0 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
version="5.0"
xml:id="sec-cgroups">
<title>Control Groups</title>
<para>
To keep track of the processes in a running system, systemd uses
<emphasis>control groups</emphasis> (cgroups). A control group is a set of
processes used to allocate resources such as CPU, memory or I/O bandwidth.
There can be multiple control group hierarchies, allowing each kind of
resource to be managed independently.
</para>
<para>
The command <command>systemd-cgls</command> lists all control groups in the
<literal>systemd</literal> hierarchy, which is what systemd uses to keep
track of the processes belonging to each service or user session:
<screen>
<prompt>$ </prompt>systemd-cgls
├─user
│ └─eelco
│ └─c1
│ ├─ 2567 -:0
│ ├─ 2682 kdeinit4: kdeinit4 Running...
│ ├─ <replaceable>...</replaceable>
│ └─10851 sh -c less -R
└─system
├─httpd.service
│ ├─2444 httpd -f /nix/store/3pyacby5cpr55a03qwbnndizpciwq161-httpd.conf -DNO_DETACH
│ └─<replaceable>...</replaceable>
├─dhcpcd.service
│ └─2376 dhcpcd --config /nix/store/f8dif8dsi2yaa70n03xir8r653776ka6-dhcpcd.conf
└─ <replaceable>...</replaceable>
</screen>
Similarly, <command>systemd-cgls cpu</command> shows the cgroups in the CPU
hierarchy, which allows per-cgroup CPU scheduling priorities. By default,
every systemd service gets its own CPU cgroup, while all user sessions are in
the top-level CPU cgroup. This ensures, for instance, that a thousand
run-away processes in the <literal>httpd.service</literal> cgroup cannot
starve the CPU for one process in the <literal>postgresql.service</literal>
cgroup. (By contrast, it they were in the same cgroup, then the PostgreSQL
process would get 1/1001 of the cgroup’s CPU time.) You can limit a
service’s CPU share in <filename>configuration.nix</filename>:
<programlisting>
<link linkend="opt-systemd.services._name_.serviceConfig">systemd.services.httpd.serviceConfig</link>.CPUShares = 512;
</programlisting>
By default, every cgroup has 1024 CPU shares, so this will halve the CPU
allocation of the <literal>httpd.service</literal> cgroup.
</para>
<para>
There also is a <literal>memory</literal> hierarchy that controls memory
allocation limits; by default, all processes are in the top-level cgroup, so
any service or session can exhaust all available memory. Per-cgroup memory
limits can be specified in <filename>configuration.nix</filename>; for
instance, to limit <literal>httpd.service</literal> to 512 MiB of RAM
(excluding swap):
<programlisting>
<link linkend="opt-systemd.services._name_.serviceConfig">systemd.services.httpd.serviceConfig</link>.MemoryLimit = "512M";
</programlisting>
</para>
<para>
The command <command>systemd-cgtop</command> shows a continuously updated
list of all cgroups with their CPU and memory usage.
</para>
</chapter>

@ -0,0 +1,48 @@
# Declarative Container Specification {#sec-declarative-containers}
You can also specify containers and their configuration in the host's
`configuration.nix`. For example, the following specifies that there
shall be a container named `database` running PostgreSQL:
```nix
containers.database =
{ config =
{ config, pkgs, ... }:
{ services.postgresql.enable = true;
services.postgresql.package = pkgs.postgresql_9_6;
};
};
```
If you run `nixos-rebuild switch`, the container will be built. If the
container was already running, it will be updated in place, without
rebooting. The container can be configured to start automatically by
setting `containers.database.autoStart = true` in its configuration.
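For instance, extending the example above, a minimal sketch of an
auto-starting container:
```nix
containers.database = {
  autoStart = true;  # start the container when the host boots
  config =
    { config, pkgs, ... }:
    { services.postgresql.enable = true;
      services.postgresql.package = pkgs.postgresql_9_6;
    };
};
```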
By default, declarative containers share the network namespace of the
host, meaning that they can listen on (privileged) ports. However, they
cannot change the network configuration. You can give a container its
own network as follows:
```nix
containers.database = {
privateNetwork = true;
hostAddress = "192.168.100.10";
localAddress = "192.168.100.11";
};
```
This gives the container a private virtual Ethernet interface with IP
address `192.168.100.11`, which is hooked up to a virtual Ethernet
interface on the host with IP address `192.168.100.10`. (See the next
section for details on container networking.)
To disable the container, just remove it from `configuration.nix` and
run `nixos-rebuild
switch`. Note that this will not delete the root directory of the
container in `/var/lib/containers`. Containers can be destroyed using
the imperative method: `nixos-container destroy foo`.
Declarative containers can be started and stopped using the
corresponding systemd service, e.g.
`systemctl start container@database`.

@ -1,60 +0,0 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
version="5.0"
xml:id="sec-declarative-containers">
<title>Declarative Container Specification</title>
<para>
You can also specify containers and their configuration in the host’s
<filename>configuration.nix</filename>. For example, the following specifies
that there shall be a container named <literal>database</literal> running
PostgreSQL:
<programlisting>
containers.database =
{ config =
{ config, pkgs, ... }:
{ <xref linkend="opt-services.postgresql.enable"/> = true;
<xref linkend="opt-services.postgresql.package"/> = pkgs.postgresql_9_6;
};
};
</programlisting>
If you run <literal>nixos-rebuild switch</literal>, the container will be
built. If the container was already running, it will be updated in place,
without rebooting. The container can be configured to start automatically by
setting <literal>containers.database.autoStart = true</literal> in its
configuration.
</para>
<para>
By default, declarative containers share the network namespace of the host,
meaning that they can listen on (privileged) ports. However, they cannot
change the network configuration. You can give a container its own network as
follows:
<programlisting>
containers.database = {
<link linkend="opt-containers._name_.privateNetwork">privateNetwork</link> = true;
<link linkend="opt-containers._name_.hostAddress">hostAddress</link> = "192.168.100.10";
<link linkend="opt-containers._name_.localAddress">localAddress</link> = "192.168.100.11";
};
</programlisting>
This gives the container a private virtual Ethernet interface with IP address
<literal>192.168.100.11</literal>, which is hooked up to a virtual Ethernet
interface on the host with IP address <literal>192.168.100.10</literal>. (See
the next section for details on container networking.)
</para>
<para>
To disable the container, just remove it from
<filename>configuration.nix</filename> and run <literal>nixos-rebuild
switch</literal>. Note that this will not delete the root directory of the
container in <literal>/var/lib/containers</literal>. Containers can be
destroyed using the imperative method: <literal>nixos-container destroy
foo</literal>.
</para>
<para>
Declarative containers can be started and stopped using the corresponding
systemd service, e.g. <literal>systemctl start container@database</literal>.
</para>
</section>

@ -0,0 +1,115 @@
# Imperative Container Management {#sec-imperative-containers}
We'll cover imperative container management using `nixos-container`
first. Be aware that container management is currently only possible as
`root`.
You create a container with identifier `foo` as follows:
```ShellSession
# nixos-container create foo
```
This creates the container's root directory in `/var/lib/containers/foo`
and a small configuration file in `/etc/containers/foo.conf`. It also
builds the container's initial system configuration and stores it in
`/nix/var/nix/profiles/per-container/foo/system`. You can modify the
initial configuration of the container on the command line. For
instance, to create a container that has `sshd` running, with the given
public key for `root`:
```ShellSession
# nixos-container create foo --config '
services.openssh.enable = true;
users.users.root.openssh.authorizedKeys.keys = ["ssh-dss AAAAB3N…"];
'
```
By default the next free address in the `10.233.0.0/16` subnet will be
chosen as container IP. This behavior can be altered by setting
`--host-address` and `--local-address`:
```ShellSession
# nixos-container create test --config-file test-container.nix \
--local-address 10.235.1.2 --host-address 10.235.1.1
```
Creating a container does not start it. To start the container, run:
```ShellSession
# nixos-container start foo
```
This command will return as soon as the container has booted and has
reached `multi-user.target`. On the host, the container runs within a
systemd unit called `container@container-name.service`. Thus, if
something went wrong, you can get status info using `systemctl`:
```ShellSession
# systemctl status container@foo
```
If the container has started successfully, you can log in as root using
the `root-login` operation:
```ShellSession
# nixos-container root-login foo
[root@foo:~]#
```
Note that only root on the host can do this (since there is no
authentication). You can also get a regular login prompt using the
`login` operation, which is available to all users on the host:
```ShellSession
# nixos-container login foo
foo login: alice
Password: ***
```
With `nixos-container run`, you can execute arbitrary commands in the
container:
```ShellSession
# nixos-container run foo -- uname -a
Linux foo 3.4.82 #1-NixOS SMP Thu Mar 20 14:44:05 UTC 2014 x86_64 GNU/Linux
```
There are several ways to change the configuration of the container.
First, on the host, you can edit
`/var/lib/container/name/etc/nixos/configuration.nix`, and run
```ShellSession
# nixos-container update foo
```
This will build and activate the new configuration. You can also specify
a new configuration on the command line:
```ShellSession
# nixos-container update foo --config '
services.httpd.enable = true;
services.httpd.adminAddr = "foo@example.org";
networking.firewall.allowedTCPPorts = [ 80 ];
'
# curl http://$(nixos-container show-ip foo)/
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
```
However, note that this will overwrite the container's
`/etc/nixos/configuration.nix`.
Alternatively, you can change the configuration from within the
container itself by running `nixos-rebuild switch` inside the container.
Note that the container by default does not have a copy of the NixOS
channel, so you should run `nix-channel --update` first.
Containers can be stopped and started using `nixos-container
stop` and `nixos-container start`, respectively, or by using
`systemctl` on the container's service unit. To destroy a container,
including its file system, do
```ShellSession
# nixos-container destroy foo
```

@ -1,123 +0,0 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
version="5.0"
xml:id="sec-imperative-containers">
<title>Imperative Container Management</title>
<para>
We’ll cover imperative container management using
<command>nixos-container</command> first. Be aware that container management
is currently only possible as <literal>root</literal>.
</para>
<para>
You create a container with identifier <literal>foo</literal> as follows:
<screen>
<prompt># </prompt>nixos-container create <replaceable>foo</replaceable>
</screen>
This creates the container’s root directory in
<filename>/var/lib/containers/<replaceable>foo</replaceable></filename> and a small configuration file
in <filename>/etc/containers/<replaceable>foo</replaceable>.conf</filename>. It also builds the
container’s initial system configuration and stores it in
<filename>/nix/var/nix/profiles/per-container/<replaceable>foo</replaceable>/system</filename>. You can
modify the initial configuration of the container on the command line. For
instance, to create a container that has <command>sshd</command> running,
with the given public key for <literal>root</literal>:
<screen>
<prompt># </prompt>nixos-container create <replaceable>foo</replaceable> --config '
<xref linkend="opt-services.openssh.enable"/> = true;
<link linkend="opt-users.users._name_.openssh.authorizedKeys.keys">users.users.root.openssh.authorizedKeys.keys</link> = ["ssh-dss AAAAB3N…"];
'
</screen>
By default the next free address in the <literal>10.233.0.0/16</literal> subnet will be chosen
as container IP. This behavior can be altered by setting <literal>--host-address</literal> and
<literal>--local-address</literal>:
<screen>
<prompt># </prompt>nixos-container create test --config-file test-container.nix \
--local-address 10.235.1.2 --host-address 10.235.1.1
</screen>
</para>
<para>
Creating a container does not start it. To start the container, run:
<screen>
<prompt># </prompt>nixos-container start <replaceable>foo</replaceable>
</screen>
This command will return as soon as the container has booted and has reached
<literal>multi-user.target</literal>. On the host, the container runs within
a systemd unit called
<literal>container@<replaceable>container-name</replaceable>.service</literal>.
Thus, if something went wrong, you can get status info using
<command>systemctl</command>:
<screen>
<prompt># </prompt>systemctl status container@<replaceable>foo</replaceable>
</screen>
</para>
<para>
If the container has started successfully, you can log in as root using the
<command>root-login</command> operation:
<screen>
<prompt># </prompt>nixos-container root-login <replaceable>foo</replaceable>
<prompt>[root@foo:~]#</prompt>
</screen>
Note that only root on the host can do this (since there is no
authentication). You can also get a regular login prompt using the
<command>login</command> operation, which is available to all users on the
host:
<screen>
<prompt># </prompt>nixos-container login <replaceable>foo</replaceable>
foo login: alice
Password: ***
</screen>
With <command>nixos-container run</command>, you can execute arbitrary
commands in the container:
<screen>
<prompt># </prompt>nixos-container run <replaceable>foo</replaceable> -- uname -a
Linux foo 3.4.82 #1-NixOS SMP Thu Mar 20 14:44:05 UTC 2014 x86_64 GNU/Linux
</screen>
</para>
<para>
There are several ways to change the configuration of the container. First,
on the host, you can edit
<literal>/var/lib/container/<replaceable>name</replaceable>/etc/nixos/configuration.nix</literal>,
and run
<screen>
<prompt># </prompt>nixos-container update <replaceable>foo</replaceable>
</screen>
This will build and activate the new configuration. You can also specify a
new configuration on the command line:
<screen>
<prompt># </prompt>nixos-container update <replaceable>foo</replaceable> --config '
<xref linkend="opt-services.httpd.enable"/> = true;
<xref linkend="opt-services.httpd.adminAddr"/> = "foo@example.org";
<xref linkend="opt-networking.firewall.allowedTCPPorts"/> = [ 80 ];
'
<prompt># </prompt>curl http://$(nixos-container show-ip <replaceable>foo</replaceable>)/
&lt;!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">…
</screen>
However, note that this will overwrite the container’s
<filename>/etc/nixos/configuration.nix</filename>.
</para>
<para>
Alternatively, you can change the configuration from within the container
itself by running <command>nixos-rebuild switch</command> inside the
container. Note that the container by default does not have a copy of the
NixOS channel, so you should run <command>nix-channel --update</command>
first.
</para>
<para>
Containers can be stopped and started using <literal>nixos-container
stop</literal> and <literal>nixos-container start</literal>, respectively, or
by using <command>systemctl</command> on the container’s service unit. To
destroy a container, including its file system, do
<screen>
<prompt># </prompt>nixos-container destroy <replaceable>foo</replaceable>
</screen>
</para>
</section>

@ -0,0 +1,38 @@
# Logging {#sec-logging}
System-wide logging is provided by systemd's *journal*, which subsumes
traditional logging daemons such as syslogd and klogd. Log entries are
kept in binary files in `/var/log/journal/`. The command `journalctl`
allows you to see the contents of the journal. For example,
```ShellSession
$ journalctl -b
```
shows all journal entries since the last reboot. (The output of
`journalctl` is piped into `less` by default.) You can use various
options and match operators to restrict output to messages of interest.
For instance, to get all messages from PostgreSQL:
```ShellSession
$ journalctl -u postgresql.service
-- Logs begin at Mon, 2013-01-07 13:28:01 CET, end at Tue, 2013-01-08 01:09:57 CET. --
...
Jan 07 15:44:14 hagbard postgres[2681]: [2-1] LOG: database system is shut down
-- Reboot --
Jan 07 15:45:10 hagbard postgres[2532]: [1-1] LOG: database system was shut down at 2013-01-07 15:44:14 CET
Jan 07 15:45:13 hagbard postgres[2500]: [1-1] LOG: database system is ready to accept connections
```
Or to get all messages since the last reboot that have at least a
"critical" severity level:
```ShellSession
$ journalctl -b -p crit
Dec 17 21:08:06 mandark sudo[3673]: pam_unix(sudo:auth): auth could not identify password for [alice]
Dec 29 01:30:22 mandark kernel[6131]: [1053513.909444] CPU6: Core temperature above threshold, cpu clock throttled (total events = 1)
```
The system journal is readable by root and by users in the `wheel` and
`systemd-journal` groups. All users have a private journal that can be
read using `journalctl`.
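Group membership can be granted declaratively in `configuration.nix`; a
minimal sketch, assuming a hypothetical user `alice`:
```nix
users.users.alice = {
  isNormalUser = true;
  # members of systemd-journal may read the full system journal
  extraGroups = [ "systemd-journal" ];
};
```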

@ -1,43 +0,0 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
version="5.0"
xml:id="sec-logging">
<title>Logging</title>
<para>
System-wide logging is provided by systemd’s <emphasis>journal</emphasis>,
which subsumes traditional logging daemons such as syslogd and klogd. Log
entries are kept in binary files in <filename>/var/log/journal/</filename>.
The command <literal>journalctl</literal> allows you to see the contents of
the journal. For example,
<screen>
<prompt>$ </prompt>journalctl -b
</screen>
shows all journal entries since the last reboot. (The output of
<command>journalctl</command> is piped into <command>less</command> by
default.) You can use various options and match operators to restrict output
to messages of interest. For instance, to get all messages from PostgreSQL:
<screen>
<prompt>$ </prompt>journalctl -u postgresql.service
-- Logs begin at Mon, 2013-01-07 13:28:01 CET, end at Tue, 2013-01-08 01:09:57 CET. --
...
Jan 07 15:44:14 hagbard postgres[2681]: [2-1] LOG: database system is shut down
-- Reboot --
Jan 07 15:45:10 hagbard postgres[2532]: [1-1] LOG: database system was shut down at 2013-01-07 15:44:14 CET
Jan 07 15:45:13 hagbard postgres[2500]: [1-1] LOG: database system is ready to accept connections
</screen>
Or to get all messages since the last reboot that have at least a
“critical” severity level:
<screen>
<prompt>$ </prompt>journalctl -b -p crit
Dec 17 21:08:06 mandark sudo[3673]: pam_unix(sudo:auth): auth could not identify password for [alice]
Dec 29 01:30:22 mandark kernel[6131]: [1053513.909444] CPU6: Core temperature above threshold, cpu clock throttled (total events = 1)
</screen>
</para>
<para>
The system journal is readable by root and by users in the
<literal>wheel</literal> and <literal>systemd-journal</literal> groups. All
users have a private journal that can be read using
<command>journalctl</command>.
</para>
</chapter>

@ -0,0 +1,11 @@
# Maintenance Mode {#sec-maintenance-mode}
You can enter rescue mode by running:
```ShellSession
# systemctl rescue
```
This will eventually give you a single-user root shell. Systemd will
stop (almost) all system services. To get out of maintenance mode, just
exit from the rescue shell.

@ -1,16 +0,0 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
version="5.0"
xml:id="sec-maintenance-mode">
<title>Maintenance Mode</title>
<para>
You can enter rescue mode by running:
<screen>
<prompt># </prompt>systemctl rescue</screen>
This will eventually give you a single-user root shell. Systemd will stop
(almost) all system services. To get out of maintenance mode, just exit from
the rescue shell.
</para>
</section>

@ -0,0 +1,21 @@
# Network Problems {#sec-nix-network-issues}
Nix uses a so-called *binary cache* to optimise building a package from
source into downloading it as a pre-built binary. That is, whenever a
command like `nixos-rebuild` needs a path in the Nix store, Nix will try
to download that path from the Internet rather than build it from
source. The default binary cache is `https://cache.nixos.org/`. If this
cache is unreachable, Nix operations may take a long time due to HTTP
connection timeouts. You can disable the use of the binary cache by
adding `--option use-binary-caches false`, e.g.
```ShellSession
# nixos-rebuild switch --option use-binary-caches false
```
If you have an alternative binary cache at your disposal, you can use it
instead:
```ShellSession
# nixos-rebuild switch --option binary-caches http://my-cache.example.org/
```
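To use an alternative cache permanently instead of passing `--option`
on every invocation, the caches can be listed in `configuration.nix`; a
sketch, assuming the `nix.binaryCaches` option of this NixOS generation:
```nix
nix.binaryCaches = [
  "https://cache.nixos.org/"      # keep the default cache
  "http://my-cache.example.org/"  # the alternative cache from above
];
```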

@ -1,27 +0,0 @@
<section xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
version="5.0"
xml:id="sec-nix-network-issues">
<title>Network Problems</title>
<para>
Nix uses a so-called <emphasis>binary cache</emphasis> to optimise building a
package from source into downloading it as a pre-built binary. That is,
whenever a command like <command>nixos-rebuild</command> needs a path in the
Nix store, Nix will try to download that path from the Internet rather than
build it from source. The default binary cache is
<uri>https://cache.nixos.org/</uri>. If this cache is unreachable, Nix
operations may take a long time due to HTTP connection timeouts. You can
disable the use of the binary cache by adding <option>--option
use-binary-caches false</option>, e.g.
<screen>
<prompt># </prompt>nixos-rebuild switch --option use-binary-caches false
</screen>
If you have an alternative binary cache at your disposal, you can use it
instead:
<screen>
<prompt># </prompt>nixos-rebuild switch --option binary-caches <replaceable>http://my-cache.example.org/</replaceable>
</screen>
</para>
</section>

@ -0,0 +1,30 @@
# Rebooting and Shutting Down {#sec-rebooting}
The system can be shut down (and automatically powered off) by doing:
```ShellSession
# shutdown
```
This is equivalent to running `systemctl poweroff`.
To reboot the system, run
```ShellSession
# reboot
```
which is equivalent to `systemctl reboot`. Alternatively, you can
quickly reboot the system using `kexec`, which bypasses the BIOS by
directly loading the new kernel into memory:
```ShellSession
# systemctl kexec
```
The machine can be suspended to RAM (if supported) using `systemctl suspend`,
and suspended to disk using `systemctl hibernate`.
These commands can be run by any user who is logged in locally, i.e. on
a virtual console or in X11; otherwise, the user is asked for
authentication.

@ -1,35 +0,0 @@
<chapter xmlns="http://docbook.org/ns/docbook"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:xi="http://www.w3.org/2001/XInclude"
version="5.0"
xml:id="sec-rebooting">
<title>Rebooting and Shutting Down</title>
<para>
The system can be shut down (and automatically powered off) by doing:
<screen>
<prompt># </prompt>shutdown
</screen>
This is equivalent to running <command>systemctl poweroff</command>.
</para>
<para>
To reboot the system, run
<screen>
<prompt># </prompt>reboot
</screen>
which is equivalent to <command>systemctl reboot</command>. Alternatively,
you can quickly reboot the system using <literal>kexec</literal>, which
bypasses the BIOS by directly loading the new kernel into memory:
<screen>
<prompt># </prompt>systemctl kexec
</screen>
</para>
<para>
The machine can be suspended to RAM (if supported) using <command>systemctl
suspend</command>, and suspended to disk using <command>systemctl
hibernate</command>.
</para>
<para>
These commands can be run by any user who is logged in locally, i.e. on a
virtual console or in X11; otherwise, the user is asked for authentication.
</para>
</chapter>

@ -0,0 +1,38 @@
# Rolling Back Configuration Changes {#sec-rollback}
After running `nixos-rebuild` to switch to a new configuration, you may
find that the new configuration doesn't work very well. In that case,
there are several ways to return to a previous configuration.
First, the GRUB boot manager allows you to boot into any previous
configuration that hasn't been garbage-collected. These configurations
can be found under the GRUB submenu "NixOS - All configurations". This
is especially useful if the new configuration fails to boot. After the
system has booted, you can make the selected configuration the default
for subsequent boots:
```ShellSession
# /run/current-system/bin/switch-to-configuration boot
```
Second, you can switch to the previous configuration in a running
system:
```ShellSession
# nixos-rebuild switch --rollback
```
This is equivalent to running:
```ShellSession
# /nix/var/nix/profiles/system-N-link/bin/switch-to-configuration switch
```
where `N` is the number of the NixOS system configuration. To get a
list of the available configurations, do:
```ShellSession
$ ls -l /nix/var/nix/profiles/system-*-link
...
lrwxrwxrwx 1 root root 78 Aug 12 13:54 /nix/var/nix/profiles/system-268-link -> /nix/store/202b...-nixos-13.07pre4932_5a676e4-4be1055
```
