commit
6d1958ad2d

# pkgs.dockerTools {#sec-pkgs-dockerTools}

`pkgs.dockerTools` is a set of functions for creating and manipulating Docker images according to the [Docker Image Specification v1.2.0](https://github.com/moby/moby/blob/master/image/spec/v1.2.md#docker-image-specification-v120). Docker itself is not used to perform any of the operations done by these functions.

## buildImage {#ssec-pkgs-dockerTools-buildImage}

This function is analogous to the `docker build` command, in that it can be used to build a Docker-compatible repository tarball containing a single image with one or multiple layers. As such, the result is suitable for being loaded in Docker with `docker load`.

The parameters of `buildImage`, with example values, are described below:

[]{#ex-dockerTools-buildImage}
[]{#ex-dockerTools-buildImage-runAsRoot}
```nix
buildImage {
  name = "redis";
  tag = "latest";

  fromImage = someBaseImage;
  fromImageName = null;
  fromImageTag = "latest";

  contents = pkgs.redis;
  runAsRoot = ''
    #!${pkgs.runtimeShell}
    mkdir -p /data
  '';

  config = {
    Cmd = [ "/bin/redis-server" ];
    WorkingDir = "/data";
    Volumes = { "/data" = { }; };
  };
}
```

The above example will build a Docker image `redis/latest` from the given base image. Loading and running this image in Docker results in `redis-server` being started automatically.

- `name` specifies the name of the resulting image. This is the only required argument for `buildImage`.

- `tag` specifies the tag of the resulting image. By default it's `null`, which indicates that the Nix output hash will be used as the tag.

- `fromImage` is the repository tarball containing the base image. It must be a valid Docker image, such as one exported by `docker save`. By default it's `null`, which can be seen as equivalent to `FROM scratch` in a `Dockerfile`.

- `fromImageName` can be used to further specify the base image within the repository, in case it contains multiple images. By default it's `null`, in which case `buildImage` will use the first image available in the repository.

- `fromImageTag` can be used to further specify the tag of the base image within the repository, in case an image contains multiple tags. By default it's `null`, in which case `buildImage` will use the first tag available for the base image.

- `contents` is a derivation that will be copied into the new layer of the resulting image. This is similar to `ADD contents/ /` in a `Dockerfile`. By default it's `null`.

- `runAsRoot` is a bash script that will run as root in an environment that overlays the existing layers of the base image with the new resulting layer, including the previously copied `contents` derivation. This is similar to `RUN ...` in a `Dockerfile`.

  > **_NOTE:_** Using this parameter requires the `kvm` device to be available.

- `config` is used to specify the configuration of the containers that will be started off the built image in Docker. The available options are listed in the [Docker Image Specification v1.2.0](https://github.com/moby/moby/blob/master/image/spec/v1.2.md#image-json-field-descriptions).

After the new layer has been created, its closure (to which `contents`, `config` and `runAsRoot` contribute) will be copied into the layer itself. Only new dependencies that are not already in the existing layers will be copied.

At the end of the process, only one new single layer will be produced and added to the resulting image.

The resulting repository will only list the single image `image/tag`. In the case of [the `buildImage` example](#ex-dockerTools-buildImage) it would be `redis/latest`.

It is possible to inspect the arguments with which an image was built using its `buildArgs` attribute.
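
As a sketch of how this can be used (assuming `pkgs` is in scope, e.g. from `import <nixpkgs> { }`), `buildArgs` simply exposes the argument set the image derivation was called with, so it can be read from another Nix expression:

```nix
# Sketch: inspecting buildArgs of an image built with buildImage.
let
  image = pkgs.dockerTools.buildImage {
    name = "redis";
    tag = "latest";
    contents = pkgs.redis;
  };
in
{
  # Evaluating these attributes does not build the image; it only
  # reads back the arguments that buildImage was given.
  builtName = image.buildArgs.name; # "redis"
  builtTag = image.buildArgs.tag;   # "latest"
}
```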

> **_NOTE:_** If you see errors similar to `getProtocolByName: does not exist (no such protocol name: tcp)` you may need to add `pkgs.iana-etc` to `contents`.

> **_NOTE:_** If you see errors similar to `Error_Protocol ("certificate has unknown CA",True,UnknownCa)` you may need to add `pkgs.cacert` to `contents`.
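
A minimal sketch combining both hints; `buildLayeredImage` is used here because its `contents` parameter is documented to accept a list, and the image name is hypothetical:

```nix
# Sketch: an image whose program needs TCP name resolution and TLS.
# pkgs.hello stands in for a hypothetical networked application.
pkgs.dockerTools.buildLayeredImage {
  name = "hello-net"; # hypothetical image name
  contents = [
    pkgs.hello    # the application itself
    pkgs.iana-etc # provides /etc/protocols and /etc/services
    pkgs.cacert   # provides the CA certificate bundle
  ];
}
```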

By default `buildImage` will use a static date of one second past the UNIX Epoch. This allows `buildImage` to produce binary reproducible images. When listing images with `docker images`, the newly created images will be listed like this:

```ShellSession
$ docker images
REPOSITORY   TAG      IMAGE ID       CREATED        SIZE
hello        latest   08c791c7846e   48 years ago   25.2MB
```

You can break binary reproducibility but have a sorted, meaningful `CREATED` column by setting `created` to `now`.

```nix
pkgs.dockerTools.buildImage {
  name = "hello";
  tag = "latest";
  created = "now";
  contents = pkgs.hello;

  config.Cmd = [ "/bin/hello" ];
}
```

Now the Docker CLI will display a reasonable date and sort the images as expected:

```ShellSession
$ docker images
REPOSITORY   TAG      IMAGE ID       CREATED              SIZE
hello        latest   de2bf4786de6   About a minute ago   25.2MB
```

However, the produced images will not be binary reproducible.

## buildLayeredImage {#ssec-pkgs-dockerTools-buildLayeredImage}

Create a Docker image with many of the store paths being on their own layer to improve sharing between images. The image is realized into the Nix store as a gzipped tarball. Depending on the intended usage, many users might prefer to use `streamLayeredImage` instead, which this function uses internally.

`name`

: The name of the resulting image.

`tag` _optional_

: Tag of the generated image.

  *Default:* the output path's hash

`contents` _optional_

: Top-level paths in the container. Either a single derivation, or a list of derivations.

  *Default:* `[]`

`config` _optional_

: Run-time configuration of the container. A full list of options is available in the [Docker Image Specification v1.2.0](https://github.com/moby/moby/blob/master/image/spec/v1.2.md#image-json-field-descriptions).

  *Default:* `{}`

`created` _optional_

: Date and time the layers were created. Follows the same `now` exception supported by `buildImage`.

  *Default:* `1970-01-01T00:00:01Z`

`maxLayers` _optional_

: Maximum number of layers to create.

  *Default:* `100`

  *Maximum:* `125`

`extraCommands` _optional_

: Shell commands to run while building the final layer, without access to most of the layer contents. Changes to this layer are "on top" of all the other layers, so it can create additional directories and files.
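
As a sketch of `extraCommands`, assuming (as in typical nixpkgs usage) that the commands run relative to the image root, a world-writable `/tmp` is a common addition, since images built this way start without one:

```nix
pkgs.dockerTools.buildLayeredImage {
  name = "hello";
  contents = [ pkgs.hello ];
  # Runs while the final layer is being built; the working directory
  # is the root of the image, hence no leading slash on the paths.
  extraCommands = ''
    mkdir -p tmp
    chmod 1777 tmp
  '';
}
```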

### Behavior of `contents` in the final image {#dockerTools-buildLayeredImage-arg-contents}

Each path directly listed in `contents` will have a symlink in the root of the image.

For example:

```nix
pkgs.dockerTools.buildLayeredImage {
  name = "hello";
  contents = [ pkgs.hello ];
}
```

will create symlinks for all the paths in the `hello` package:

```ShellSession
/bin/hello -> /nix/store/h1zb1padqbbb7jicsvkmrym3r6snphxg-hello-2.10/bin/hello
/share/info/hello.info -> /nix/store/h1zb1padqbbb7jicsvkmrym3r6snphxg-hello-2.10/share/info/hello.info
/share/locale/bg/LC_MESSAGES/hello.mo -> /nix/store/h1zb1padqbbb7jicsvkmrym3r6snphxg-hello-2.10/share/locale/bg/LC_MESSAGES/hello.mo
```

### Automatic inclusion of `config` references {#dockerTools-buildLayeredImage-arg-config}

The closure of `config` is automatically included in the closure of the final image.

This allows you to make very simple Docker images with very little code. This container will start up and run `hello`:

```nix
pkgs.dockerTools.buildLayeredImage {
  name = "hello";
  config.Cmd = [ "${pkgs.hello}/bin/hello" ];
}
```

### Adjusting `maxLayers` {#dockerTools-buildLayeredImage-arg-maxLayers}

Increasing `maxLayers` increases the number of layers which have a chance to be shared between different images.

Modern Docker installations support up to 128 layers; however, older versions support as few as 42.

If the produced image will not be extended by other Docker builds, it is safe to set `maxLayers` to `128`; however, it will then be impossible to extend the image further.

The first `maxLayers - 2` most "popular" paths will have their own individual layers, then layer #`maxLayers - 1` will contain all the remaining "unpopular" paths, and finally layer #`maxLayers` will contain the image configuration.

Docker's layers are not inherently ordered; they are content-addressable and are not explicitly layered until they are composed into an image.

## streamLayeredImage {#ssec-pkgs-dockerTools-streamLayeredImage}

Builds a script which, when run, will stream an uncompressed tarball of a Docker image to stdout. The arguments to this function are the same as for `buildLayeredImage`. This method of constructing an image does not realize the image into the Nix store, so it saves on IO and disk/cache space, particularly with large images.

The image produced by running the output script can be piped directly into `docker load`, to load it into the local Docker daemon:

```ShellSession
$(nix-build) | docker load
```

Alternatively, the image can be piped via `gzip` into `skopeo`, e.g. to copy it into a registry:

```ShellSession
$(nix-build) | gzip --fast | skopeo copy docker-archive:/dev/stdin docker://some_docker_registry/myimage:tag
```

## pullImage {#ssec-pkgs-dockerTools-fetchFromRegistry}

This function is analogous to the `docker pull` command, in that it can be used to pull a Docker image from a Docker registry. By default [Docker Hub](https://hub.docker.com/) is used to pull images.

Its parameters are described in the example below:

```nix
pullImage {
  imageName = "nixos/nix";
  imageDigest = "sha256:20d9485b25ecfd89204e843a962c1bd70e9cc6858d65d7f5fadc340246e2116b";
  finalImageName = "nix";
  finalImageTag = "1.11";
  sha256 = "0mqjy3zq2v6rrhizgb9nvhczl87lcfphq9601wcprdika2jz7qh8";
  os = "linux";
  arch = "x86_64";
}
```

- `imageName` specifies the name of the image to be downloaded, which can also include the registry namespace (e.g. `nixos`). This argument is required.

- `imageDigest` specifies the digest of the image to be downloaded. This argument is required.

- `finalImageName`, if specified, is the name of the image to be created. Note that it is never used to fetch the image, since we prefer to rely on the immutable digest ID. By default it's equal to `imageName`.

- `finalImageTag`, if specified, is the tag of the image to be created. Note that it is never used to fetch the image, since we prefer to rely on the immutable digest ID. By default it's `latest`.

- `sha256` is the checksum of the whole fetched image. This argument is required.

- `os`, if specified, is the operating system of the fetched image. By default it's `linux`.

- `arch`, if specified, is the CPU architecture of the fetched image. By default it's `x86_64`.

The `nix-prefetch-docker` command can be used to get the required image parameters:

```ShellSession
$ nix run nixpkgs.nix-prefetch-docker -c nix-prefetch-docker --image-name mysql --image-tag 5
```

Since a given `imageName` may transparently refer to a manifest list of images which support multiple architectures and/or operating systems, you can supply the `--os` and `--arch` arguments to specify exactly which image you want. By default it will match the OS and architecture of the host the command is run on.

```ShellSession
$ nix-prefetch-docker --image-name mysql --image-tag 5 --arch x86_64 --os linux
```

The desired image name and tag can be set using the `--final-image-name` and `--final-image-tag` arguments:

```ShellSession
$ nix-prefetch-docker --image-name mysql --image-tag 5 --final-image-name eu.gcr.io/my-project/mysql --final-image-tag prod
```
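
The command prints a Nix attribute set whose shape matches the `pullImage` parameters, so it can be pasted directly into a `pullImage` call. The values below are placeholders illustrating that shape, not real output:

```nix
# Placeholder sketch of nix-prefetch-docker output, usable as
# the argument set for pkgs.dockerTools.pullImage:
{
  imageName = "mysql";
  imageDigest = "sha256:0000000000000000000000000000000000000000000000000000000000000000";
  sha256 = "0000000000000000000000000000000000000000000000000000";
  finalImageName = "mysql";
  finalImageTag = "5";
}
```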

## exportImage {#ssec-pkgs-dockerTools-exportImage}

This function is analogous to the `docker export` command, in that it can be used to flatten a Docker image that contains multiple layers. The result is in fact the merge of all the layers of the image. As such, it is suitable for being imported in Docker with `docker import`.

> **_NOTE:_** Using this function requires the `kvm` device to be available.

The parameters of `exportImage` are the following:

```nix
exportImage {
  fromImage = someLayeredImage;
  fromImageName = null;
  fromImageTag = null;

  name = someLayeredImage.name;
}
```

The parameters relating to the base image have the same meaning as described in [buildImage](#ssec-pkgs-dockerTools-buildImage), except that `fromImage` is the only required argument in this case.

The `name` argument is the name of the derivation output, which defaults to `fromImage.name`.

## shadowSetup {#ssec-pkgs-dockerTools-shadowSetup}

This constant string is a helper for setting up the base files for managing users and groups, but only if such files don't exist already. It is suitable for being used in a [`buildImage` `runAsRoot`](#ex-dockerTools-buildImage-runAsRoot) script for cases like the example below:

```nix
buildImage {
  name = "shadow-basic";

  runAsRoot = ''
    #!${pkgs.runtimeShell}
    ${shadowSetup}
    groupadd -r redis
    useradd -r -g redis redis
    mkdir /data
    chown redis:redis /data
  '';
}
```

Creating base files like `/etc/passwd` or `/etc/login.defs` is necessary for shadow-utils to manipulate users and groups.
@ -1,499 +0,0 @@ |
||||
<section xmlns="http://docbook.org/ns/docbook" |
||||
xmlns:xlink="http://www.w3.org/1999/xlink" |
||||
xmlns:xi="http://www.w3.org/2001/XInclude" |
||||
xml:id="sec-pkgs-dockerTools"> |
||||
<title>pkgs.dockerTools</title> |
||||
|
||||
<para> |
||||
<varname>pkgs.dockerTools</varname> is a set of functions for creating and manipulating Docker images according to the <link xlink:href="https://github.com/moby/moby/blob/master/image/spec/v1.2.md#docker-image-specification-v120"> Docker Image Specification v1.2.0 </link>. Docker itself is not used to perform any of the operations done by these functions. |
||||
</para> |
||||
|
||||
<section xml:id="ssec-pkgs-dockerTools-buildImage"> |
||||
<title>buildImage</title> |
||||
|
||||
<para> |
||||
This function is analogous to the <command>docker build</command> command, in that it can be used to build a Docker-compatible repository tarball containing a single image with one or multiple layers. As such, the result is suitable for being loaded in Docker with <command>docker load</command>. |
||||
</para> |
||||
|
||||
<para> |
||||
The parameters of <varname>buildImage</varname> with relative example values are described below: |
||||
</para> |
||||
|
||||
<example xml:id='ex-dockerTools-buildImage'> |
||||
<title>Docker build</title> |
||||
<programlisting> |
||||
buildImage { |
||||
name = "redis"; <co xml:id='ex-dockerTools-buildImage-1' /> |
||||
tag = "latest"; <co xml:id='ex-dockerTools-buildImage-2' /> |
||||
|
||||
fromImage = someBaseImage; <co xml:id='ex-dockerTools-buildImage-3' /> |
||||
fromImageName = null; <co xml:id='ex-dockerTools-buildImage-4' /> |
||||
fromImageTag = "latest"; <co xml:id='ex-dockerTools-buildImage-5' /> |
||||
|
||||
contents = pkgs.redis; <co xml:id='ex-dockerTools-buildImage-6' /> |
||||
runAsRoot = '' <co xml:id='ex-dockerTools-buildImage-runAsRoot' /> |
||||
#!${pkgs.runtimeShell} |
||||
mkdir -p /data |
||||
''; |
||||
|
||||
config = { <co xml:id='ex-dockerTools-buildImage-8' /> |
||||
Cmd = [ "/bin/redis-server" ]; |
||||
WorkingDir = "/data"; |
||||
Volumes = { |
||||
"/data" = {}; |
||||
}; |
||||
}; |
||||
} |
||||
</programlisting> |
||||
</example> |
||||
|
||||
<para> |
||||
The above example will build a Docker image <literal>redis/latest</literal> from the given base image. Loading and running this image in Docker results in <literal>redis-server</literal> being started automatically. |
||||
</para> |
||||
|
||||
<calloutlist> |
||||
<callout arearefs='ex-dockerTools-buildImage-1'> |
||||
<para> |
||||
<varname>name</varname> specifies the name of the resulting image. This is the only required argument for <varname>buildImage</varname>. |
||||
</para> |
||||
</callout> |
||||
<callout arearefs='ex-dockerTools-buildImage-2'> |
||||
<para> |
||||
<varname>tag</varname> specifies the tag of the resulting image. By default it's <literal>null</literal>, which indicates that the nix output hash will be used as tag. |
||||
</para> |
||||
</callout> |
||||
<callout arearefs='ex-dockerTools-buildImage-3'> |
||||
<para> |
||||
<varname>fromImage</varname> is the repository tarball containing the base image. It must be a valid Docker image, such as exported by <command>docker save</command>. By default it's <literal>null</literal>, which can be seen as equivalent to <literal>FROM scratch</literal> of a <filename>Dockerfile</filename>. |
||||
</para> |
||||
</callout> |
||||
<callout arearefs='ex-dockerTools-buildImage-4'> |
||||
<para> |
||||
<varname>fromImageName</varname> can be used to further specify the base image within the repository, in case it contains multiple images. By default it's <literal>null</literal>, in which case <varname>buildImage</varname> will peek the first image available in the repository. |
||||
</para> |
||||
</callout> |
||||
<callout arearefs='ex-dockerTools-buildImage-5'> |
||||
<para> |
||||
<varname>fromImageTag</varname> can be used to further specify the tag of the base image within the repository, in case an image contains multiple tags. By default it's <literal>null</literal>, in which case <varname>buildImage</varname> will peek the first tag available for the base image. |
||||
</para> |
||||
</callout> |
||||
<callout arearefs='ex-dockerTools-buildImage-6'> |
||||
<para> |
||||
<varname>contents</varname> is a derivation that will be copied in the new layer of the resulting image. This can be similarly seen as <command>ADD contents/ /</command> in a <filename>Dockerfile</filename>. By default it's <literal>null</literal>. |
||||
</para> |
||||
</callout> |
||||
<callout arearefs='ex-dockerTools-buildImage-runAsRoot'> |
||||
<para> |
||||
<varname>runAsRoot</varname> is a bash script that will run as root in an environment that overlays the existing layers of the base image with the new resulting layer, including the previously copied <varname>contents</varname> derivation. This can be similarly seen as <command>RUN ...</command> in a <filename>Dockerfile</filename>. |
||||
<note> |
||||
<para> |
||||
Using this parameter requires the <literal>kvm</literal> device to be available. |
||||
</para> |
||||
</note> |
||||
</para> |
||||
</callout> |
||||
<callout arearefs='ex-dockerTools-buildImage-8'> |
||||
<para> |
||||
<varname>config</varname> is used to specify the configuration of the containers that will be started off the built image in Docker. The available options are listed in the <link xlink:href="https://github.com/moby/moby/blob/master/image/spec/v1.2.md#image-json-field-descriptions"> Docker Image Specification v1.2.0 </link>. |
||||
</para> |
||||
</callout> |
||||
</calloutlist> |
||||
|
||||
<para> |
||||
After the new layer has been created, its closure (to which <varname>contents</varname>, <varname>config</varname> and <varname>runAsRoot</varname> contribute) will be copied in the layer itself. Only new dependencies that are not already in the existing layers will be copied. |
||||
</para> |
||||
|
||||
<para> |
||||
At the end of the process, only one new single layer will be produced and added to the resulting image. |
||||
</para> |
||||
|
||||
<para> |
||||
The resulting repository will only list the single image <varname>image/tag</varname>. In the case of <xref linkend='ex-dockerTools-buildImage'/> it would be <varname>redis/latest</varname>. |
||||
</para> |
||||
|
||||
<para> |
||||
It is possible to inspect the arguments with which an image was built using its <varname>buildArgs</varname> attribute. |
||||
</para> |
||||
|
||||
<note> |
||||
<para> |
||||
If you see errors similar to <literal>getProtocolByName: does not exist (no such protocol name: tcp)</literal> you may need to add <literal>pkgs.iana-etc</literal> to <varname>contents</varname>. |
||||
</para> |
||||
</note> |
||||
|
||||
<note> |
||||
<para> |
||||
If you see errors similar to <literal>Error_Protocol ("certificate has unknown CA",True,UnknownCa)</literal> you may need to add <literal>pkgs.cacert</literal> to <varname>contents</varname>. |
||||
</para> |
||||
</note> |
||||
|
||||
<example xml:id="example-pkgs-dockerTools-buildImage-creation-date"> |
||||
<title>Impurely Defining a Docker Layer's Creation Date</title> |
||||
<para> |
||||
By default <function>buildImage</function> will use a static date of one second past the UNIX Epoch. This allows <function>buildImage</function> to produce binary reproducible images. When listing images with <command>docker images</command>, the newly created images will be listed like this: |
||||
</para> |
||||
<screen> |
||||
<prompt>$ </prompt>docker images |
||||
REPOSITORY TAG IMAGE ID CREATED SIZE |
||||
hello latest 08c791c7846e 48 years ago 25.2MB |
||||
</screen> |
||||
<para> |
||||
You can break binary reproducibility but have a sorted, meaningful <literal>CREATED</literal> column by setting <literal>created</literal> to <literal>now</literal>. |
||||
</para> |
||||
<programlisting><![CDATA[ |
||||
pkgs.dockerTools.buildImage { |
||||
name = "hello"; |
||||
tag = "latest"; |
||||
created = "now"; |
||||
contents = pkgs.hello; |
||||
|
||||
config.Cmd = [ "/bin/hello" ]; |
||||
} |
||||
]]></programlisting> |
||||
<para> |
||||
and now the Docker CLI will display a reasonable date and sort the images as expected: |
||||
<screen> |
||||
<prompt>$ </prompt>docker images |
||||
REPOSITORY TAG IMAGE ID CREATED SIZE |
||||
hello latest de2bf4786de6 About a minute ago 25.2MB |
||||
</screen> |
||||
however, the produced images will not be binary reproducible. |
||||
</para> |
||||
</example> |
||||
</section> |
||||
|
||||
<section xml:id="ssec-pkgs-dockerTools-buildLayeredImage"> |
||||
<title>buildLayeredImage</title> |
||||
|
||||
<para> |
||||
Create a Docker image with many of the store paths being on their own layer to improve sharing between images. The image is realized into the Nix store as a gzipped tarball. Depending on the intended usage, many users might prefer to use <function>streamLayeredImage</function> instead, which this function uses internally. |
||||
</para> |
||||
|
||||
<variablelist> |
||||
<varlistentry> |
||||
<term> |
||||
<varname>name</varname> |
||||
</term> |
||||
<listitem> |
||||
<para> |
||||
The name of the resulting image. |
||||
</para> |
||||
</listitem> |
||||
</varlistentry> |
||||
<varlistentry> |
||||
<term> |
||||
<varname>tag</varname> <emphasis>optional</emphasis> |
||||
</term> |
||||
<listitem> |
||||
<para> |
||||
Tag of the generated image. |
||||
</para> |
||||
<para> |
||||
<emphasis>Default:</emphasis> the output path's hash |
||||
</para> |
||||
</listitem> |
||||
</varlistentry> |
||||
<varlistentry> |
||||
<term> |
||||
<varname>contents</varname> <emphasis>optional</emphasis> |
||||
</term> |
||||
<listitem> |
||||
<para> |
||||
Top level paths in the container. Either a single derivation, or a list of derivations. |
||||
</para> |
||||
<para> |
||||
<emphasis>Default:</emphasis> <literal>[]</literal> |
||||
</para> |
||||
</listitem> |
||||
</varlistentry> |
||||
<varlistentry> |
||||
<term> |
||||
<varname>config</varname> <emphasis>optional</emphasis> |
||||
</term> |
||||
<listitem> |
||||
<para> |
||||
Run-time configuration of the container. A full list of the options are available at in the <link xlink:href="https://github.com/moby/moby/blob/master/image/spec/v1.2.md#image-json-field-descriptions"> Docker Image Specification v1.2.0 </link>. |
||||
</para> |
||||
<para> |
||||
<emphasis>Default:</emphasis> <literal>{}</literal> |
||||
</para> |
||||
</listitem> |
||||
</varlistentry> |
||||
<varlistentry> |
||||
<term> |
||||
<varname>created</varname> <emphasis>optional</emphasis> |
||||
</term> |
||||
<listitem> |
||||
<para> |
||||
Date and time the layers were created. Follows the same <literal>now</literal> exception supported by <literal>buildImage</literal>. |
||||
</para> |
||||
<para> |
||||
<emphasis>Default:</emphasis> <literal>1970-01-01T00:00:01Z</literal> |
||||
</para> |
||||
</listitem> |
||||
</varlistentry> |
||||
<varlistentry> |
||||
<term> |
||||
<varname>maxLayers</varname> <emphasis>optional</emphasis> |
||||
</term> |
||||
<listitem> |
||||
<para> |
||||
Maximum number of layers to create. |
||||
</para> |
||||
<para> |
||||
<emphasis>Default:</emphasis> <literal>100</literal> |
||||
</para> |
||||
<para> |
||||
<emphasis>Maximum:</emphasis> <literal>125</literal> |
||||
</para> |
||||
</listitem> |
||||
</varlistentry> |
||||
<varlistentry> |
||||
<term> |
||||
<varname>extraCommands</varname> <emphasis>optional</emphasis> |
||||
</term> |
||||
<listitem> |
||||
<para> |
||||
Shell commands to run while building the final layer, without access to most of the layer contents. Changes to this layer are "on top" of all the other layers, so can create additional directories and files. |
||||
</para> |
||||
</listitem> |
||||
</varlistentry> |
||||
</variablelist> |
||||
|
||||
<section xml:id="dockerTools-buildLayeredImage-arg-contents"> |
||||
<title>Behavior of <varname>contents</varname> in the final image</title> |
||||
|
||||
<para> |
||||
Each path directly listed in <varname>contents</varname> will have a symlink in the root of the image. |
||||
</para> |
||||
|
||||
<para> |
||||
For example: |
||||
<programlisting><![CDATA[ |
||||
pkgs.dockerTools.buildLayeredImage { |
||||
name = "hello"; |
||||
contents = [ pkgs.hello ]; |
||||
} |
||||
]]></programlisting> |
||||
will create symlinks for all the paths in the <literal>hello</literal> package: |
||||
<screen><![CDATA[ |
||||
/bin/hello -> /nix/store/h1zb1padqbbb7jicsvkmrym3r6snphxg-hello-2.10/bin/hello |
||||
/share/info/hello.info -> /nix/store/h1zb1padqbbb7jicsvkmrym3r6snphxg-hello-2.10/share/info/hello.info |
||||
/share/locale/bg/LC_MESSAGES/hello.mo -> /nix/store/h1zb1padqbbb7jicsvkmrym3r6snphxg-hello-2.10/share/locale/bg/LC_MESSAGES/hello.mo |
||||
]]></screen> |
||||
</para> |
||||
</section> |
||||
|
||||
<section xml:id="dockerTools-buildLayeredImage-arg-config"> |
||||
<title>Automatic inclusion of <varname>config</varname> references</title> |
||||
|
||||
<para> |
||||
The closure of <varname>config</varname> is automatically included in the closure of the final image. |
||||
</para> |
||||
|
||||
<para> |
||||
This allows you to make very simple Docker images with very little code. This container will start up and run <command>hello</command>: |
||||
<programlisting><![CDATA[ |
||||
pkgs.dockerTools.buildLayeredImage { |
||||
name = "hello"; |
||||
config.Cmd = [ "${pkgs.hello}/bin/hello" ]; |
||||
} |
||||
]]></programlisting> |
||||
</para> |
||||
</section> |
||||
|
||||
<section xml:id="dockerTools-buildLayeredImage-arg-maxLayers"> |
||||
<title>Adjusting <varname>maxLayers</varname></title> |
||||
|
||||
<para> |
||||
Increasing the <varname>maxLayers</varname> increases the number of layers which have a chance to be shared between different images. |
||||
</para> |
||||
|
||||
<para> |
||||
Modern Docker installations support up to 128 layers, however older versions support as few as 42. |
||||
</para> |
||||
|
||||
<para> |
||||
If the produced image will not be extended by other Docker builds, it is safe to set <varname>maxLayers</varname> to <literal>128</literal>. However it will be impossible to extend the image further. |
||||
</para> |
||||
|
||||
<para> |
||||
The first (<literal>maxLayers-2</literal>) most "popular" paths will have their own individual layers, then layer #<literal>maxLayers-1</literal> will contain all the remaining "unpopular" paths, and finally layer #<literal>maxLayers</literal> will contain the Image configuration. |
||||
</para> |
||||
|
||||
<para> |
||||
Docker's Layers are not inherently ordered, they are content-addressable and are not explicitly layered until they are composed in to an Image. |
||||
</para> |
||||
</section> |
||||
</section> |
||||
|
||||
<section xml:id="ssec-pkgs-dockerTools-streamLayeredImage"> |
||||
<title>streamLayeredImage</title> |
||||
|
||||
<para> |
||||
Builds a script which, when run, will stream an uncompressed tarball of a Docker image to stdout. The arguments to this function are as for <function>buildLayeredImage</function>. This method of constructing an image does not realize the image into the Nix store, so it saves on IO and disk/cache space, particularly with large images. |
||||
</para> |
||||
|
||||
<para> |
||||
The image produced by running the output script can be piped directly into <command>docker load</command>, to load it into the local docker daemon: |
||||
<screen><![CDATA[ |
||||
$(nix-build) | docker load |
||||
]]></screen> |
||||
</para> |
||||
<para> |
||||
Alternatively, the image be piped via <command>gzip</command> into <command>skopeo</command>, e.g. to copy it into a registry: |
||||
<screen><![CDATA[ |
||||
$(nix-build) | gzip --fast | skopeo copy docker-archive:/dev/stdin docker://some_docker_registry/myimage:tag |
||||
]]></screen> |
||||
</para> |
||||
</section> |

## pullImage {#ssec-pkgs-dockerTools-fetchFromRegistry}

This function is analogous to the `docker pull` command, in that it can be used to pull a Docker image from a Docker registry. By default [Docker Hub](https://hub.docker.com/) is used to pull images.

Its parameters are described in the example below:

[]{#ex-dockerTools-pullImage}

```nix
pullImage {
  imageName = "nixos/nix";
  imageDigest = "sha256:20d9485b25ecfd89204e843a962c1bd70e9cc6858d65d7f5fadc340246e2116b";
  finalImageName = "nix";
  finalImageTag = "1.11";
  sha256 = "0mqjy3zq2v6rrhizgb9nvhczl87lcfphq9601wcprdika2jz7qh8";
  os = "linux";
  arch = "x86_64";
}
```

- `imageName` specifies the name of the image to be downloaded, which can also include the registry namespace (e.g. `nixos`). This argument is required.

- `imageDigest` specifies the digest of the image to be downloaded. This argument is required.

- `finalImageName`, if specified, is the name of the image to be created. It is never used to fetch the image, since we prefer to rely on the immutable digest ID. By default it's equal to `imageName`.

- `finalImageTag`, if specified, is the tag of the image to be created. It is never used to fetch the image, since we prefer to rely on the immutable digest ID. By default it's `latest`.

- `sha256` is the checksum of the whole fetched image. This argument is required.

- `os`, if specified, is the operating system of the fetched image. By default it's `linux`.

- `arch`, if specified, is the CPU architecture of the fetched image. By default it's `x86_64`.

The `nix-prefetch-docker` command can be used to get the required image parameters:

```ShellSession
$ nix run nixpkgs.nix-prefetch-docker -c nix-prefetch-docker --image-name mysql --image-tag 5
```

Since a given `imageName` may transparently refer to a manifest list of images which support multiple architectures and/or operating systems, you can supply the `--os` and `--arch` arguments to specify exactly which image you want. By default it will match the OS and architecture of the host the command is run on.

```ShellSession
$ nix-prefetch-docker --image-name mysql --image-tag 5 --arch x86_64 --os linux
```

The desired image name and tag can be set using the `--final-image-name` and `--final-image-tag` arguments:

```ShellSession
$ nix-prefetch-docker --image-name mysql --image-tag 5 --final-image-name eu.gcr.io/my-project/mysql --final-image-tag prod
```

## exportImage {#ssec-pkgs-dockerTools-exportImage}

This function is analogous to the `docker export` command, in that it can be used to flatten a Docker image that contains multiple layers. The result is the merge of all the layers of the image. As such, it is suitable for being imported in Docker with `docker import`.

::: {.note}
Using this function requires the `kvm` device to be available.
:::

The parameters of `exportImage` are the following:

[]{#ex-dockerTools-exportImage}

```nix
exportImage {
  fromImage = someLayeredImage;
  fromImageName = null;
  fromImageTag = null;

  name = someLayeredImage.name;
}
```

The parameters relative to the base image have the same synopsis as described in [buildImage](#ssec-pkgs-dockerTools-buildImage), except that `fromImage` is the only required argument in this case.

The `name` argument is the name of the derivation output, which defaults to `fromImage.name`.

## shadowSetup {#ssec-pkgs-dockerTools-shadowSetup}

This constant string is a helper for setting up the base files for managing users and groups, only if such files don't exist already. It is suitable for use in a `runAsRoot` script ([see above](#ex-dockerTools-buildImage-runAsRoot)) for cases like the example below:

[]{#ex-dockerTools-shadowSetup}

```nix
buildImage {
  name = "shadow-basic";

  runAsRoot = ''
    #!${pkgs.runtimeShell}
    ${shadowSetup}
    groupadd -r redis
    useradd -r -g redis redis
    mkdir /data
    chown redis:redis /data
  '';
}
```

Creating base files like `/etc/passwd` or `/etc/login.defs` is necessary for shadow-utils to manipulate users and groups.
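
After loading such an image, the created entries can be checked interactively. A minimal sketch, assuming Docker is available and the image also provides the needed tools on its path (e.g. `coreutils` in `contents`; the commands are illustrative):

```ShellSession
$ docker load < result
$ docker run --rm shadow-basic cat /etc/passwd
```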
@ -0,0 +1,164 @@ |
{ config, pkgs, lib, ... }:

with lib;
let
  cfg = config.services.lifecycled;

  # TODO: Add the ability to extend this with an rfc 42-like interface.
  # In the meantime, one can modify the environment (as
  # long as it's not overriding anything from here) with
  # systemd.services.lifecycled.serviceConfig.Environment
  configFile = pkgs.writeText "lifecycled" ''
    LIFECYCLED_HANDLER=${cfg.handler}
    ${lib.optionalString (cfg.cloudwatchGroup != null) "LIFECYCLED_CLOUDWATCH_GROUP=${cfg.cloudwatchGroup}"}
    ${lib.optionalString (cfg.cloudwatchStream != null) "LIFECYCLED_CLOUDWATCH_STREAM=${cfg.cloudwatchStream}"}
    ${lib.optionalString cfg.debug "LIFECYCLED_DEBUG=${lib.boolToString cfg.debug}"}
    ${lib.optionalString (cfg.instanceId != null) "LIFECYCLED_INSTANCE_ID=${cfg.instanceId}"}
    ${lib.optionalString cfg.json "LIFECYCLED_JSON=${lib.boolToString cfg.json}"}
    ${lib.optionalString cfg.noSpot "LIFECYCLED_NO_SPOT=${lib.boolToString cfg.noSpot}"}
    ${lib.optionalString (cfg.snsTopic != null) "LIFECYCLED_SNS_TOPIC=${cfg.snsTopic}"}
    ${lib.optionalString (cfg.awsRegion != null) "AWS_REGION=${cfg.awsRegion}"}
  '';
in
{
  meta.maintainers = with maintainers; [ cole-h grahamc ];

  options = {
    services.lifecycled = {
      enable = mkEnableOption "lifecycled";

      queueCleaner = {
        enable = mkEnableOption "lifecycled-queue-cleaner";

        frequency = mkOption {
          type = types.str;
          default = "hourly";
          description = ''
            How often to trigger the queue cleaner.

            NOTE: This string should be a valid value for a systemd
            timer's <literal>OnCalendar</literal> configuration. See
            <citerefentry><refentrytitle>systemd.timer</refentrytitle><manvolnum>5</manvolnum></citerefentry>
            for more information.
          '';
        };

        parallel = mkOption {
          type = types.ints.unsigned;
          default = 20;
          description = ''
            The number of parallel deletes to run.
          '';
        };
      };

      instanceId = mkOption {
        type = types.nullOr types.str;
        default = null;
        description = ''
          The instance ID to listen for events for.
        '';
      };

      snsTopic = mkOption {
        type = types.nullOr types.str;
        default = null;
        description = ''
          The SNS topic that receives events.
        '';
      };

      noSpot = mkOption {
        type = types.bool;
        default = false;
        description = ''
          Disable the spot termination listener.
        '';
      };

      handler = mkOption {
        type = types.path;
        description = ''
          The script to invoke to handle events.
        '';
      };

      json = mkOption {
        type = types.bool;
        default = false;
        description = ''
          Enable JSON logging.
        '';
      };

      cloudwatchGroup = mkOption {
        type = types.nullOr types.str;
        default = null;
        description = ''
          Write logs to a specific Cloudwatch Logs group.
        '';
      };

      cloudwatchStream = mkOption {
        type = types.nullOr types.str;
        default = null;
        description = ''
          Write logs to a specific Cloudwatch Logs stream. Defaults to the instance ID.
        '';
      };

      debug = mkOption {
        type = types.bool;
        default = false;
        description = ''
          Enable debugging information.
        '';
      };

      # XXX: Can be removed if / when
      # https://github.com/buildkite/lifecycled/pull/91 is merged.
      awsRegion = mkOption {
        type = types.nullOr types.str;
        default = null;
        description = ''
          The region used for accessing AWS services.
        '';
      };
    };
  };

  ### Implementation ###

  config = mkMerge [
    (mkIf cfg.enable {
      environment.etc."lifecycled".source = configFile;

      systemd.packages = [ pkgs.lifecycled ];
      systemd.services.lifecycled = {
        wantedBy = [ "network-online.target" ];
        restartTriggers = [ configFile ];
      };
    })

    (mkIf cfg.queueCleaner.enable {
      systemd.services.lifecycled-queue-cleaner = {
        description = "Lifecycle Daemon Queue Cleaner";
        environment = optionalAttrs (cfg.awsRegion != null) { AWS_REGION = cfg.awsRegion; };
        serviceConfig = {
          Type = "oneshot";
          ExecStart = "${pkgs.lifecycled}/bin/lifecycled-queue-cleaner -parallel ${toString cfg.queueCleaner.parallel}";
        };
      };

      systemd.timers.lifecycled-queue-cleaner = {
        description = "Lifecycle Daemon Queue Cleaner Timer";
        wantedBy = [ "timers.target" ];
        after = [ "network-online.target" ];
        timerConfig = {
          Unit = "lifecycled-queue-cleaner.service";
          OnCalendar = "${cfg.queueCleaner.frequency}";
        };
      };
    })
  ];
}
@ -0,0 +1,82 @@ |
{ config, pkgs, lib, ... }:

with lib;

let
  cfg = config.services.plikd;

  format = pkgs.formats.toml {};
  plikdCfg = format.generate "plikd.cfg" cfg.settings;
in
{
  options = {
    services.plikd = {
      enable = mkEnableOption "the plikd server";

      openFirewall = mkOption {
        type = types.bool;
        default = false;
        description = "Open ports in the firewall for plikd.";
      };

      settings = mkOption {
        type = format.type;
        default = {};
        description = ''
          Configuration for plikd, see <link xlink:href="https://github.com/root-gg/plik/blob/master/server/plikd.cfg"/>
          for supported values.
        '';
      };
    };
  };

  config = mkIf cfg.enable {
    services.plikd.settings = mapAttrs (name: mkDefault) {
      ListenPort = 8080;
      ListenAddress = "localhost";
      DataBackend = "file";
      DataBackendConfig = {
        Directory = "/var/lib/plikd";
      };
      MetadataBackendConfig = {
        Driver = "sqlite3";
        ConnectionString = "/var/lib/plikd/plik.db";
      };
    };

    systemd.services.plikd = {
      description = "Plikd file sharing server";
      after = [ "network.target" ];
      wantedBy = [ "multi-user.target" ];
      serviceConfig = {
        Type = "simple";
        ExecStart = "${pkgs.plikd}/bin/plikd --config ${plikdCfg}";
        Restart = "on-failure";
        StateDirectory = "plikd";
        LogsDirectory = "plikd";
        DynamicUser = true;

        # Basic hardening
        NoNewPrivileges = "yes";
        PrivateTmp = "yes";
        PrivateDevices = "yes";
        DevicePolicy = "closed";
        ProtectSystem = "strict";
        ProtectHome = "read-only";
        ProtectControlGroups = "yes";
        ProtectKernelModules = "yes";
        ProtectKernelTunables = "yes";
        RestrictAddressFamilies = "AF_UNIX AF_INET AF_INET6 AF_NETLINK";
        RestrictNamespaces = "yes";
        RestrictRealtime = "yes";
        RestrictSUIDSGID = "yes";
        MemoryDenyWriteExecute = "yes";
        LockPersonality = "yes";
      };
    };

    networking.firewall = mkIf cfg.openFirewall {
      allowedTCPPorts = [ cfg.settings.ListenPort ];
    };
  };
}
@ -0,0 +1,27 @@ |
import ./make-test-python.nix ({ lib, ... }: {
  name = "plikd";
  meta = with lib.maintainers; {
    maintainers = [ freezeboy ];
  };

  machine = { pkgs, ... }: {
    services.plikd.enable = true;
    environment.systemPackages = [ pkgs.plik ];
  };

  testScript = ''
    # Service basic test
    machine.wait_for_unit("plikd")

    # Network test
    machine.wait_for_open_port(8080)
    machine.succeed("curl --fail -v http://localhost:8080")

    # Application test
    machine.execute("echo test > /tmp/data.txt")
    machine.succeed("plik --server http://localhost:8080 /tmp/data.txt | grep curl")

    machine.succeed("diff data.txt /tmp/data.txt")
  '';
})
@ -0,0 +1,41 @@ |
{ mkDerivation
, lib
, extra-cmake-modules
, kdoctools
, wrapQtAppsHook
, qtdeclarative
, qtgraphicaleffects
, qtquickcontrols2
, kirigami2
, kpurpose
, gst_all_1
, pcre
}:

let
  gst = with gst_all_1; [ gstreamer gst-libav gst-plugins-base gst-plugins-good gst-plugins-bad ];

in
mkDerivation {
  pname = "kamoso";
  nativeBuildInputs = [ extra-cmake-modules kdoctools wrapQtAppsHook ];
  buildInputs = [ pcre ] ++ gst;
  propagatedBuildInputs = [
    qtdeclarative
    qtgraphicaleffects
    qtquickcontrols2
    kirigami2
    kpurpose
  ];

  cmakeFlags = [
    "-DOpenGL_GL_PREFERENCE=GLVND"
    "-DGSTREAMER_VIDEO_INCLUDE_DIR=${gst_all_1.gst-plugins-base.dev}/include/gstreamer-1.0"
  ];

  qtWrapperArgs = [
    "--prefix GST_PLUGIN_PATH : ${lib.makeSearchPath "lib/gstreamer-1.0" gst}"
  ];

  meta.license = with lib.licenses; [ lgpl21Only gpl3Only ];
}
@ -0,0 +1,46 @@ |
{ lib, mkDerivation, fetchFromGitHub, wrapQtAppsHook
, qmake, qttools, kirigami2, qtquickcontrols2, qtlocation, qtsensors
, nemo-qml-plugin-dbus, mapbox-gl-qml, s2geometry
, python3, pyotherside, python3Packages
}:

mkDerivation rec {
  pname = "pure-maps";
  version = "2.6.0";

  src = fetchFromGitHub {
    owner = "rinigus";
    repo = "pure-maps";
    rev = version;
    sha256 = "1nviq2pavyxwh9k4kyzqpbzmx1wybwdax4pyd017izh9h6gqnjhs";
    fetchSubmodules = true;
  };

  nativeBuildInputs = [ qmake python3 qttools wrapQtAppsHook ];
  buildInputs = [
    kirigami2 qtquickcontrols2 qtlocation qtsensors
    nemo-qml-plugin-dbus pyotherside mapbox-gl-qml s2geometry
  ];
  propagatedBuildInputs = with python3Packages; [ gpxpy pyxdg ];

  postPatch = ''
    substituteInPlace pure-maps.pro \
      --replace '$$[QT_HOST_BINS]/lconvert' 'lconvert'
  '';

  qmakeFlags = [ "FLAVOR=kirigami" ];

  dontWrapQtApps = true;
  postInstall = ''
    wrapQtApp $out/bin/pure-maps \
      --prefix PYTHONPATH : "$out/share"
  '';

  meta = with lib; {
    description = "Display vector and raster maps, places, routes, and provide navigation instructions with a flexible selection of data and service providers";
    homepage = "https://github.com/rinigus/pure-maps";
    license = licenses.gpl3Only;
    maintainers = [ maintainers.Thra11 ];
    platforms = platforms.linux;
  };
}
@ -0,0 +1,24 @@ |
{ lib, buildGoModule, fetchFromGitHub }:

buildGoModule rec {
  pname = "senv";
  version = "0.5.0";

  src = fetchFromGitHub {
    owner = "SpectralOps";
    repo = pname;
    rev = "v${version}";
    sha256 = "014422sdks2xlpsgvynwibz25jg1fj5s8dcf8b1j6djgq5glhfaf";
  };

  vendorSha256 = "05n55yf75r7i9kl56kw9x6hgmyf5bva5dzp9ni2ws0lb1389grfc";

  subPackages = [ "." ];

  meta = with lib; {
    description = "Friends don't let friends leak secrets on their terminal window";
    homepage = "https://github.com/SpectralOps/senv";
    license = licenses.mit;
    maintainers = with maintainers; [ SuperSandro2000 ];
  };
}
@ -0,0 +1,25 @@ |
{ lib
, rustPlatform
, fetchFromGitHub
}:

rustPlatform.buildRustPackage rec {
  pname = "stork";
  version = "1.1.0";

  src = fetchFromGitHub {
    owner = "jameslittle230";
    repo = "stork";
    rev = "v${version}";
    sha256 = "sha256-pBJ9n1pQafXagQt9bnj4N1jriczr47QLtKiv+UjWgTg=";
  };

  cargoSha256 = "sha256-u8L4ZeST4ExYB2y8E+I49HCy41dOfhR1fgPpcVMVDuk=";

  meta = with lib; {
    description = "Impossibly fast web search, made for static sites";
    homepage = "https://github.com/jameslittle230/stork";
    license = with licenses; [ asl20 ];
    maintainers = with maintainers; [ chuahou ];
  };
}
@ -0,0 +1,35 @@ |
#!/usr/bin/env nix-shell
#!nix-shell -i python3 -p python3Packages.feedparser python3Packages.requests

# This script prints the Git commit message for stable channel updates.

import re
import textwrap

import feedparser
import requests

feed = feedparser.parse('https://chromereleases.googleblog.com/feeds/posts/default')
html_tags = re.compile(r'<[^>]+>')

for entry in feed.entries:
    if entry.title != 'Stable Channel Update for Desktop':
        continue
    url = requests.get(entry.link).url.split('?')[0]
    content = entry.content[0].value
    if re.search(r'Linux', content) is None:
        continue
    # print(url)  # For debugging purposes
    version = re.search(r'\d+(\.\d+){3}', content).group(0)
    fixes = re.search(r'This update includes .+ security fixes\.', content).group(0)
    fixes = html_tags.sub('', fixes)
    zero_days = re.search(r'Google is aware of reports that .+ in the wild\.', content)
    if zero_days:
        fixes += " " + zero_days.group(0)
    cve_list = re.findall(r'CVE-[^: ]+', content)
    cve_string = ' '.join(cve_list)
    print('chromium: TODO -> ' + version + '\n')
    print(url + '\n')
    print('\n'.join(textwrap.wrap(fixes, width=72)) + '\n')
    print("CVEs:\n" + '\n'.join(textwrap.wrap(cve_string, width=72)))
    break  # We only care about the most recent stable channel update
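The script above hinges on a handful of regular expressions. A small self-contained sketch of how they behave on an invented announcement snippet (the snippet is hypothetical, not a real blog post):

```python
import re

# Hypothetical fragment resembling a Chrome release announcement.
content = (
    '<p>The Stable channel has been updated to 87.0.4280.88 for Linux.</p>'
    '<p>This update includes 3 security fixes.</p>'
    '<p>[$5000][1234] High CVE-2020-16037: Use after free.</p>'
)
html_tags = re.compile(r'<[^>]+>')

# Four dot-separated numeric groups -> the version string.
version = re.search(r'\d+(\.\d+){3}', content).group(0)
# The security-fixes sentence, with any markup stripped.
fixes = html_tags.sub('', re.search(r'This update includes .+ security fixes\.', content).group(0))
# CVE identifiers stop at the first colon or space.
cves = ' '.join(re.findall(r'CVE-[^: ]+', content))

print(version)  # 87.0.4280.88
print(fixes)    # This update includes 3 security fixes.
print(cves)     # CVE-2020-16037
```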
@ -0,0 +1,11 @@ |
{ callPackage }:

{
  helm-diff = callPackage ./helm-diff.nix {};

  helm-s3 = callPackage ./helm-s3.nix {};

  helm-secrets = callPackage ./helm-secrets.nix {};
}
@ -0,0 +1,35 @@ |
{ buildGoModule, fetchFromGitHub, lib }:

buildGoModule rec {
  pname = "helm-diff";
  version = "3.1.3";

  src = fetchFromGitHub {
    owner = "databus23";
    repo = pname;
    rev = "v${version}";
    sha256 = "sha256-h26EOjKNrlcrs2DAYj0NmDRgNRKozjfw5DtxUgHNTa4=";
  };

  vendorSha256 = "sha256-+n/QBuZqtdgUkaBG7iqSuBfljn+AdEzDoIo5SI8ErQA=";

  # NOTE: Remove the install and upgrade hooks.
  postPatch = ''
    sed -i '/^hooks:/,+2 d' plugin.yaml
  '';

  postInstall = ''
    install -dm755 $out/${pname}
    mv $out/bin $out/${pname}/
    mv $out/${pname}/bin/{helm-,}diff
    install -m644 -Dt $out/${pname} plugin.yaml
  '';

  meta = with lib; {
    description = "A Helm plugin that shows a diff";
    inherit (src.meta) homepage;
    license = licenses.asl20;
    maintainers = with maintainers; [ yurrriq ];
    platforms = platforms.all;
  };
}
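The `postPatch` phase above strips the plugin's install/upgrade hooks by deleting the `hooks:` key and the two lines that follow it. A minimal sketch of how that sed address range behaves (the `plugin.yaml` fragment below is hypothetical, not the plugin's actual manifest):

```shell
# Hypothetical plugin.yaml fragment.
cat > /tmp/plugin.yaml <<'EOF'
name: "diff"
version: "3.1.3"
hooks:
  install: "scripts/install.sh"
  update: "scripts/install.sh"
command: "$HELM_PLUGIN_DIR/bin/diff"
EOF

# '/^hooks:/,+2 d' selects the line matching ^hooks: plus the next
# two lines (a GNU sed address extension) and deletes them.
sed -i '/^hooks:/,+2 d' /tmp/plugin.yaml
cat /tmp/plugin.yaml
```

Only the `name`, `version`, and `command` keys remain, so the hook scripts are never run when the plugin comes from the store.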
@ -0,0 +1,38 @@ |
{ buildGoModule, fetchFromGitHub, lib }:

buildGoModule rec {
  pname = "helm-s3";
  version = "0.10.0";

  src = fetchFromGitHub {
    owner = "hypnoglow";
    repo = pname;
    rev = "v${version}";
    sha256 = "sha256-2BQ/qtoL+iFbuLvrJGUuxWFKg9u1sVDRcRm2/S0mgyc=";
  };

  vendorSha256 = "sha256-/9TiY0XdkiNxW5JYeC5WD9hqySCyYYU8lB+Ft5Vm96I=";

  # NOTE: Remove the install and upgrade hooks.
  postPatch = ''
    sed -i '/^hooks:/,+2 d' plugin.yaml
  '';

  checkPhase = ''
    make test-unit
  '';

  postInstall = ''
    install -dm755 $out/${pname}
    mv $out/bin $out/${pname}/
    install -m644 -Dt $out/${pname} plugin.yaml
  '';

  meta = with lib; {
    description = "A Helm plugin that allows using AWS S3 as a chart repository";
    inherit (src.meta) homepage;
    license = licenses.asl20;
    maintainers = with maintainers; [ yurrriq ];
    platforms = platforms.all;
  };
}
@ -0,0 +1,44 @@ |
{ lib, stdenv, fetchFromGitHub, makeWrapper, coreutils, findutils, getopt, gnugrep, gnused, sops, vault }:

stdenv.mkDerivation rec {
  pname = "helm-secrets";
  version = "3.4.1";

  src = fetchFromGitHub {
    owner = "jkroepke";
    repo = pname;
    rev = "v${version}";
    sha256 = "sha256-EXCr0QjupsBBKTm6Opw5bcNwAD4FGGyOiqaa8L91/OI=";
  };

  nativeBuildInputs = [ makeWrapper ];
  buildInputs = [ getopt sops ];

  # NOTE: helm-secrets consists of shell scripts.
  dontBuild = true;

  # NOTE: Remove the install and upgrade hooks.
  postPatch = ''
    sed -i '/^hooks:/,+2 d' plugin.yaml
  '';

  installPhase = ''
    runHook preInstall

    install -dm755 $out/${pname} $out/${pname}/scripts
    install -m644 -Dt $out/${pname} plugin.yaml
    cp -r scripts/* $out/${pname}/scripts
    wrapProgram $out/${pname}/scripts/run.sh \
      --prefix PATH : ${lib.makeBinPath [ coreutils findutils getopt gnugrep gnused sops vault ]}

    runHook postInstall
  '';

  meta = with lib; {
    description = "A Helm plugin that helps manage secrets";
    inherit (src.meta) homepage;
    license = licenses.asl20;
    maintainers = with maintainers; [ yurrriq ];
    platforms = platforms.all;
  };
}
@ -0,0 +1,48 @@ |
{ stdenv, symlinkJoin, lib, makeWrapper
, writeText
}:

helm:

let
  wrapper = {
    plugins ? [],
    extraMakeWrapperArgs ? ""
  }:
    let
      initialMakeWrapperArgs = [
        "${helm}/bin/helm" "${placeholder "out"}/bin/helm"
        "--argv0" "$0" "--set" "HELM_PLUGINS" "${pluginsDir}"
      ];

      pluginsDir = symlinkJoin {
        name = "helm-plugins";
        paths = plugins;
      };
    in
    symlinkJoin {
      name = "helm-${lib.getVersion helm}";

      # Remove the symlinks created by symlinkJoin which we need to perform
      # extra actions upon
      postBuild = ''
        rm $out/bin/helm
        makeWrapper ${lib.escapeShellArgs initialMakeWrapperArgs} ${extraMakeWrapperArgs}
      '';
      paths = [ helm pluginsDir ];

      preferLocalBuild = true;

      nativeBuildInputs = [ makeWrapper ];
      passthru = { unwrapped = helm; };

      meta = helm.meta // {
        # To prevent builds on hydra
        hydraPlatforms = [];
        # prefer wrapper over the package
        priority = (helm.meta.priority or 0) - 1;
      };
    };
in
lib.makeOverridable wrapper
@ -0,0 +1,74 @@ |
{ lib
, mkDerivation
, fetchFromGitHub
, cmake
, pkg-config
, doxygen
, wrapQtAppsHook
, pcre
, poco
, qtbase
, qtsvg
, libsForQt5
, nlohmann_json
, soapysdr-with-plugins
, portaudio
, alsaLib
, muparserx
, python3
}:

mkDerivation rec {
  pname = "pothos";
  version = "0.7.1";

  src = fetchFromGitHub {
    owner = "pothosware";
    repo = "PothosCore";
    rev = "pothos-${version}";
    sha256 = "038c3ipvf4sgj0zhm3vcj07ymsva4ds6v89y43f5d3p4n8zc2rsg";
    fetchSubmodules = true;
  };

  patches = [
    # spuce's CMakeLists.txt uses QT5_USE_Modules, which does not seem to work on Nix
    ./spuce.patch
  ];

  nativeBuildInputs = [ cmake pkg-config doxygen wrapQtAppsHook ];

  buildInputs = [
    pcre poco qtbase qtsvg libsForQt5.qwt nlohmann_json
    soapysdr-with-plugins portaudio alsaLib muparserx python3
  ];

  postInstall = ''
    install -Dm644 $out/share/Pothos/Desktop/pothos-flow.desktop $out/share/applications/pothos-flow.desktop
    install -Dm644 $out/share/Pothos/Desktop/pothos-flow-16.png $out/share/icons/hicolor/16x16/apps/pothos-flow.png
    install -Dm644 $out/share/Pothos/Desktop/pothos-flow-22.png $out/share/icons/hicolor/22x22/apps/pothos-flow.png
    install -Dm644 $out/share/Pothos/Desktop/pothos-flow-32.png $out/share/icons/hicolor/32x32/apps/pothos-flow.png
    install -Dm644 $out/share/Pothos/Desktop/pothos-flow-48.png $out/share/icons/hicolor/48x48/apps/pothos-flow.png
    install -Dm644 $out/share/Pothos/Desktop/pothos-flow-64.png $out/share/icons/hicolor/64x64/apps/pothos-flow.png
    install -Dm644 $out/share/Pothos/Desktop/pothos-flow-128.png $out/share/icons/hicolor/128x128/apps/pothos-flow.png
    install -Dm644 $out/share/Pothos/Desktop/pothos-flow.xml $out/share/mime/application/pothos-flow.xml
    rm -r $out/share/Pothos/Desktop
  '';

  dontWrapQtApps = true;
  preFixup = ''
    # PothosUtil does not need to be wrapped
    wrapQtApp $out/bin/PothosFlow
    wrapQtApp $out/bin/spuce_fir_plot
    wrapQtApp $out/bin/spuce_iir_plot
    wrapQtApp $out/bin/spuce_other_plot
    wrapQtApp $out/bin/spuce_window_plot
  '';

  meta = with lib; {
    description = "The Pothos data-flow framework";
    homepage = "https://github.com/pothosware/PothosCore/wiki";
    license = licenses.boost;
    platforms = platforms.linux;
    maintainers = with maintainers; [ eduardosm ];
  };
}
@ -0,0 +1,101 @@ |
diff --git a/spuce/qt_fir/CMakeLists.txt b/spuce/qt_fir/CMakeLists.txt
index fa2e580..e32113c 100644
--- a/spuce/qt_fir/CMakeLists.txt
+++ b/spuce/qt_fir/CMakeLists.txt
@@ -6,7 +6,7 @@ Message("Project spuce fir_plot")
 set(CMAKE_INCLUDE_CURRENT_DIR ON)
 set(CMAKE_AUTOMOC ON)
 
-FIND_PACKAGE(Qt5 REQUIRED Gui Core Widgets)
+FIND_PACKAGE(Qt5 REQUIRED Gui Core Widgets PrintSupport)
 
 set(SOURCES
 make_filter.cpp
@@ -27,11 +27,7 @@ set_property(TARGET spuce_fir PROPERTY POSITION_INDEPENDENT_CODE TRUE)
 set_property(TARGET spuce_fir_plot PROPERTY POSITION_INDEPENDENT_CODE TRUE)
 set_property(TARGET spuce_fir_plot PROPERTY CXX_STANDARD 11)
 
-TARGET_LINK_LIBRARIES(spuce_fir_plot spuce_fir ${QT_LIBRARIES} spuce)
-QT5_USE_Modules(spuce_fir_plot Gui)
-QT5_USE_Modules(spuce_fir_plot Core)
-QT5_USE_Modules(spuce_fir_plot Widgets)
-QT5_USE_Modules(spuce_fir_plot PrintSupport)
+TARGET_LINK_LIBRARIES(spuce_fir_plot spuce_fir ${QT_LIBRARIES} spuce Qt::Gui Qt::Core Qt::Widgets Qt::PrintSupport)
 
 INSTALL(TARGETS spuce_fir_plot DESTINATION bin)
 
diff --git a/spuce/qt_iir/CMakeLists.txt b/spuce/qt_iir/CMakeLists.txt
index 4717226..debb5f9 100644
--- a/spuce/qt_iir/CMakeLists.txt
+++ b/spuce/qt_iir/CMakeLists.txt
@@ -6,7 +6,7 @@ Message("Project spuce iir_plot")
 set(CMAKE_INCLUDE_CURRENT_DIR ON)
 set(CMAKE_AUTOMOC ON)
 
-FIND_PACKAGE(Qt5 REQUIRED Gui Core Widgets)
+FIND_PACKAGE(Qt5 REQUIRED Gui Core Widgets PrintSupport)
 
 set(SOURCES
 make_filter.cpp
@@ -27,10 +27,6 @@ set_property(TARGET spuce_iir PROPERTY POSITION_INDEPENDENT_CODE TRUE)
 set_property(TARGET spuce_iir_plot PROPERTY CXX_STANDARD 11)
 set_property(TARGET spuce_iir_plot PROPERTY POSITION_INDEPENDENT_CODE TRUE)
 
-TARGET_LINK_LIBRARIES(spuce_iir_plot spuce_iir ${QT_LIBRARIES} spuce)
-QT5_USE_Modules(spuce_iir_plot Gui)
-QT5_USE_Modules(spuce_iir_plot Core)
-QT5_USE_Modules(spuce_iir_plot Widgets)
-QT5_USE_Modules(spuce_iir_plot PrintSupport)
+TARGET_LINK_LIBRARIES(spuce_iir_plot spuce_iir ${QT_LIBRARIES} spuce Qt::Gui Qt::Core Qt::Widgets Qt::PrintSupport)
 
 INSTALL(TARGETS spuce_iir_plot DESTINATION bin)
diff --git a/spuce/qt_other/CMakeLists.txt b/spuce/qt_other/CMakeLists.txt
index 29c270d..e1ed778 100644
--- a/spuce/qt_other/CMakeLists.txt
+++ b/spuce/qt_other/CMakeLists.txt
@@ -6,7 +6,7 @@ Message("Project spuce window_plot")
 set(CMAKE_INCLUDE_CURRENT_DIR ON)
 set(CMAKE_AUTOMOC ON)
 
-FIND_PACKAGE(Qt5 REQUIRED Gui Core Widgets)
+FIND_PACKAGE(Qt5 REQUIRED Gui Core Widgets PrintSupport)
 
 set(SOURCES make_filter.cpp)
 ADD_LIBRARY(spuce_other STATIC ${SOURCES})
@@ -23,10 +23,6 @@ ADD_EXECUTABLE(spuce_other_plot ${other_plot_SOURCES} ${other_plot_HEADERS_MOC})
 set_property(TARGET spuce_other_plot PROPERTY CXX_STANDARD 11)
 set_property(TARGET spuce_other_plot PROPERTY POSITION_INDEPENDENT_CODE TRUE)
 
-TARGET_LINK_LIBRARIES(spuce_other_plot spuce_other ${QT_LIBRARIES} spuce)
-QT5_USE_Modules(spuce_other_plot Gui)
-QT5_USE_Modules(spuce_other_plot Core)
-QT5_USE_Modules(spuce_other_plot Widgets)
-QT5_USE_Modules(spuce_other_plot PrintSupport)
+TARGET_LINK_LIBRARIES(spuce_other_plot spuce_other ${QT_LIBRARIES} spuce Qt::Gui Qt::Core Qt::Widgets Qt::PrintSupport)
 
 INSTALL(TARGETS spuce_other_plot DESTINATION bin)
diff --git a/spuce/qt_window/CMakeLists.txt b/spuce/qt_window/CMakeLists.txt
index e95c85b..4a77ab8 100644
--- a/spuce/qt_window/CMakeLists.txt
+++ b/spuce/qt_window/CMakeLists.txt
@@ -6,7 +6,7 @@ Message("Project spuce window_plot")
 set(CMAKE_INCLUDE_CURRENT_DIR ON)
 set(CMAKE_AUTOMOC ON)
 
-FIND_PACKAGE(Qt5 REQUIRED Gui Core Widgets)
+FIND_PACKAGE(Qt5 REQUIRED Gui Core Widgets PrintSupport)
 
 set(SOURCES make_filter.cpp)
 
@@ -25,10 +25,6 @@ set_property(TARGET spuce_window_plot PROPERTY CXX_STANDARD 11)
 set_property(TARGET spuce_win PROPERTY POSITION_INDEPENDENT_CODE TRUE)
 set_property(TARGET spuce_window_plot PROPERTY POSITION_INDEPENDENT_CODE TRUE)
 
-TARGET_LINK_LIBRARIES(spuce_window_plot spuce_win ${QT_LIBRARIES} spuce)
-QT5_USE_Modules(spuce_window_plot Gui)
-QT5_USE_Modules(spuce_window_plot Core)
-QT5_USE_Modules(spuce_window_plot Widgets)
-QT5_USE_Modules(spuce_window_plot PrintSupport)
+TARGET_LINK_LIBRARIES(spuce_window_plot spuce_win ${QT_LIBRARIES} spuce Qt::Gui Qt::Core Qt::Widgets Qt::PrintSupport)
 
 INSTALL(TARGETS spuce_window_plot DESTINATION bin)