Commit ff4320f
change build environment to be manylinux2014 compatible

Squashed history:

- Set up symlink for devtoolset-8
- Combine Docker GCR presubmits and also push main to GCR
- Commit missed files
- Log in to GCR
- Fix conditional, hopefully
- Clarify
- Add Python 3.10 support (#58): adds Python 3.10 support to the containers. Python 3.10 changes some library behavior and, for now, needs an alternative installation method to work.
- Upgrade grpcio for fast build and clean up setup
- Add utilities for running release tests (#56): adds the dependencies and, notably, the bazelrc config options to run TensorFlow's Nightly and Release tests, replicating internal CI. Documentation and migration work remain, but the major portion of the support work is here.
- Add gdb to the system packages
- Change to gcc 8.3.1 from centos7 for devtoolset-8
- Fix libstdc++ symlink in devtoolset-8 environment
- Undo ignoring other xml files
- Update README
- Deduplicate repeated messages
- Squash long runfiles paths
- Lock nvidia driver to 460
- libtensorflow work
- Fix libtensorflow script and start prelim check
- Update test requirements to have the same versions as tf_sig_build_dockerfiles/devel.requirements.txt (#65): add additional gitignore files; keep versions consistent with devel.requirements.txt; cleanup
- Fix build issue from `python_include` (#67): remove Python 3.10 pip special handling; link usr/include to usr/local/include; update location of python include; update setup.python.sh
- Assorted changes:
  - Remove installation of nvidia-profiler, which depends on libcuda1, which ultimately installs an nvidia driver package. We don't want that because we're running in Docker, where the drivers are mounted. If nvidia-profiler turns out to be necessary for anything important, driver versions will need to be synchronized between the containers and VM images.
  - Add less, colordiff, and a newer version of clang-format
  - Add code_check_changed_files, intended to replace the "incremental" parts of ci_sanity. Still a work in progress: valuable configurations need to be decided on (clang-format and pylint cannot be run the same way as they are configured internally and currently have a lot of findings)
  - Add code_check_full, intended to replace the "across the entire code base" parts of ci_sanity; many of the clunkier tests were rewritten. Still a work in progress: the changed tests must be verified to still fail.
  - Fix bad "bazel test" expansion for libtensorflow
  - Fix bad chmod for the libtensorflow repacker
- Change libtensorflow config values to fix target selection
- Fix a typo in venv installation (thanks to reedwm)
- Remove extra lines (thanks again to reedwm)
- Clarify ctrl-s warning
- Correctly remove extra test filters
- Make it possible to run isolated pip tests
- More work on code checks
- Fix a typo
- Clean up code check full
- Remove clang-format
- Clean up changed_files and move one check to full
- Add a missing test
- Clean up and fix code_check_full
- Update docs and create experimental RBE configs
- Update dependencies to 2.9.0.dev
- Update Go API installation guide for TensorFlow 2.8.0 (#74)
- Clarify usage of nightly commit
- Fix mistaken 'test' command
- Change to devtoolset-9 and gcc 9.3.1 for manylinux2014
- Change cachebuster value for ml2014 remote cache
- Change to new libstdcxx ABI for devtoolset-9
- Change cachebuster value to use the new libstdcxx ABI
- Link against nonshared44 in devtoolset-9
- Update the cachebuster value
- Change CACHEBUSTER value for GPU builds
- Remove redundant commands during build environment setup
- Change cachebuster variable name for GPU builds
- Store manylinux2014 cache in a different location
1 parent 428ab65 commit ff4320f

File tree

8 files changed: +102 −40 lines

tf_sig_build_dockerfiles/Dockerfile

Lines changed: 6 additions & 1 deletion
@@ -7,18 +7,23 @@ COPY setup.packages.sh setup.packages.sh
 COPY builder.packages.txt builder.packages.txt
 RUN /setup.packages.sh /builder.packages.txt
 
-# Install devtoolset-7 in /dt7 with gclibc 2.12 and libstdc++ 4.4, for building
+# Install devtoolset-7 in /dt7 with glibc 2.12 and libstdc++ 4.4, for building
 # manylinux2010-compatible packages. Scripts expect to be in the root directory.
 COPY builder.devtoolset/fixlinks.sh /fixlinks.sh
 COPY builder.devtoolset/rpm-patch.sh /rpm-patch.sh
 COPY builder.devtoolset/build_devtoolset.sh /build_devtoolset.sh
 RUN /build_devtoolset.sh devtoolset-7 /dt7
 
+# Install devtoolset-9 in /dt9 with glibc 2.17 and libstdc++ 4.8, for building
+# manylinux2014-compatible packages.
+RUN /build_devtoolset.sh devtoolset-9 /dt9
+
 ################################################################################
 FROM nvidia/cuda:11.2.2-base-ubuntu20.04 as devel
 ################################################################################
 
 COPY --from=builder /dt7 /dt7
+COPY --from=builder /dt9 /dt9
 
 # Install required development packages but delete unneeded CUDA bloat
 # CUDA must be cleaned up in the same command to prevent Docker layer bloating
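The diff above relies on Docker's multi-stage pattern: the toolchains are compiled once in the builder stage and only the finished sysroots are copied into the CUDA-based devel stage, keeping toolchain build dependencies out of the final image. A simplified sketch of the pattern (the builder base image and elided steps are placeholders, not taken from this commit):

```
# Builder stage compiles both cross-toolchains into self-contained sysroots.
# (Base image here is illustrative; the real builder stage is defined earlier
# in the Dockerfile and is not shown in this excerpt.)
FROM ubuntu:20.04 as builder
# ... package setup elided ...
# RUN /build_devtoolset.sh devtoolset-7 /dt7
# RUN /build_devtoolset.sh devtoolset-9 /dt9

# Devel stage starts from the CUDA base image and inherits only the sysroots.
FROM nvidia/cuda:11.2.2-base-ubuntu20.04 as devel
COPY --from=builder /dt7 /dt7
COPY --from=builder /dt9 /dt9
```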

tf_sig_build_dockerfiles/builder.devtoolset/build_devtoolset.sh

Lines changed: 80 additions & 28 deletions
@@ -15,34 +15,60 @@
 # ==============================================================================
 #
 # Builds a devtoolset cross-compiler targeting manylinux 2010 (glibc 2.12 /
-# libstdc++ 4.4).
+# libstdc++ 4.4) or manylinux2014 (glibc 2.17 / libstdc++ 4.8).
 
 VERSION="$1"
 TARGET="$2"
 
 case "${VERSION}" in
 devtoolset-7)
   LIBSTDCXX_VERSION="6.0.24"
+  LIBSTDCXX_ABI="gcc4-compatible"
   ;;
-devtoolset-8)
-  LIBSTDCXX_VERSION="6.0.25"
+devtoolset-9)
+  LIBSTDCXX_VERSION="6.0.28"
+  LIBSTDCXX_ABI="new"
   ;;
 *)
-  echo "Usage: $0 {devtoolset-7|devtoolset-8} <target-directory>"
+  echo "Usage: $0 {devtoolset-7|devtoolset-9} <target-directory>"
+  echo "Use 'devtoolset-7' to build a manylinux2010 compatible toolchain or 'devtoolset-9' to build a manylinux2014 compatible toolchain"
   exit 1
   ;;
 esac
 
 mkdir -p "${TARGET}"
-# Download binary glibc 2.12 release.
-wget "http://old-releases.ubuntu.com/ubuntu/pool/main/e/eglibc/libc6_2.12.1-0ubuntu6_amd64.deb" && \
-unar "libc6_2.12.1-0ubuntu6_amd64.deb" && \
-tar -C "${TARGET}" -xvzf "libc6_2.12.1-0ubuntu6_amd64/data.tar.gz" && \
-rm -rf "libc6_2.12.1-0ubuntu6_amd64.deb" "libc6_2.12.1-0ubuntu6_amd64"
-wget "http://old-releases.ubuntu.com/ubuntu/pool/main/e/eglibc/libc6-dev_2.12.1-0ubuntu6_amd64.deb" && \
-unar "libc6-dev_2.12.1-0ubuntu6_amd64.deb" && \
-tar -C "${TARGET}" -xvzf "libc6-dev_2.12.1-0ubuntu6_amd64/data.tar.gz" && \
-rm -rf "libc6-dev_2.12.1-0ubuntu6_amd64.deb" "libc6-dev_2.12.1-0ubuntu6_amd64"
+
+# Download glibc's shared and development libraries based on the value of the
+# `VERSION` parameter.
+# Note: 'Templatizing' this and the other conditional branches would require
+# defining several variables (version, os, path) making it difficult to maintain
+# and extend for future modifications.
+case "${VERSION}" in
+devtoolset-7)
+  # Download binary glibc 2.12 shared library release.
+  wget "http://old-releases.ubuntu.com/ubuntu/pool/main/e/eglibc/libc6_2.12.1-0ubuntu6_amd64.deb" && \
+  unar "libc6_2.12.1-0ubuntu6_amd64.deb" && \
+  tar -C "${TARGET}" -xvzf "libc6_2.12.1-0ubuntu6_amd64/data.tar.gz" && \
+  rm -rf "libc6_2.12.1-0ubuntu6_amd64.deb" "libc6_2.12.1-0ubuntu6_amd64"
+  # Download binary glibc 2.12 development library release.
+  wget "http://old-releases.ubuntu.com/ubuntu/pool/main/e/eglibc/libc6-dev_2.12.1-0ubuntu6_amd64.deb" && \
+  unar "libc6-dev_2.12.1-0ubuntu6_amd64.deb" && \
+  tar -C "${TARGET}" -xvzf "libc6-dev_2.12.1-0ubuntu6_amd64/data.tar.gz" && \
+  rm -rf "libc6-dev_2.12.1-0ubuntu6_amd64.deb" "libc6-dev_2.12.1-0ubuntu6_amd64"
+  ;;
+devtoolset-9)
+  # Download binary glibc 2.17 shared library release.
+  wget "http://old-releases.ubuntu.com/ubuntu/pool/main/e/eglibc/libc6_2.17-0ubuntu5.1_amd64.deb" && \
+  unar "libc6_2.17-0ubuntu5.1_amd64.deb" && \
+  tar -C "${TARGET}" -xvzf "libc6_2.17-0ubuntu5.1_amd64/data.tar.gz" && \
+  rm -rf "libc6_2.17-0ubuntu5.1_amd64.deb" "libc6_2.17-0ubuntu5.1_amd64"
+  # Download binary glibc 2.17 development library release.
+  wget "http://old-releases.ubuntu.com/ubuntu/pool/main/e/eglibc/libc6-dev_2.17-0ubuntu5.1_amd64.deb" && \
+  unar "libc6-dev_2.17-0ubuntu5.1_amd64.deb" && \
+  tar -C "${TARGET}" -xvzf "libc6-dev_2.17-0ubuntu5.1_amd64/data.tar.gz" && \
+  rm -rf "libc6-dev_2.17-0ubuntu5.1_amd64.deb" "libc6-dev_2.17-0ubuntu5.1_amd64"
+  ;;
+esac
 
 # Put the current kernel headers from ubuntu in place.
 ln -s "/usr/include/linux" "/${TARGET}/usr/include/linux"
@@ -56,29 +82,41 @@ ln -s "/usr/include/x86_64-linux-gnu/asm" "/${TARGET}/usr/include/asm"
 # Patch to allow non-glibc 2.12 compatible builds to work.
 sed -i '54i#define TCP_USER_TIMEOUT 18' "/${TARGET}/usr/include/netinet/tcp.h"
 
-# Download binary libstdc++ 4.4 release we are going to link against.
-# We only need the shared library, as we're going to develop against the
-# libstdc++ provided by devtoolset.
-wget "http://old-releases.ubuntu.com/ubuntu/pool/main/g/gcc-4.4/libstdc++6_4.4.3-4ubuntu5_amd64.deb" && \
-unar "libstdc++6_4.4.3-4ubuntu5_amd64.deb" && \
-tar -C "/${TARGET}" -xvzf "libstdc++6_4.4.3-4ubuntu5_amd64/data.tar.gz" "./usr/lib/libstdc++.so.6.0.13" && \
-rm -rf "libstdc++6_4.4.3-4ubuntu5_amd64.deb" "libstdc++6_4.4.3-4ubuntu5_amd64"
+# Download specific version of libstdc++ shared library based on the value of
+# the `VERSION` parameter
+case "${VERSION}" in
+devtoolset-7)
+  # Download binary libstdc++ 4.4 release we are going to link against.
+  # We only need the shared library, as we're going to develop against the
+  # libstdc++ provided by devtoolset.
+  wget "http://old-releases.ubuntu.com/ubuntu/pool/main/g/gcc-4.4/libstdc++6_4.4.3-4ubuntu5_amd64.deb" && \
+  unar "libstdc++6_4.4.3-4ubuntu5_amd64.deb" && \
+  tar -C "/${TARGET}" -xvzf "libstdc++6_4.4.3-4ubuntu5_amd64/data.tar.gz" "./usr/lib/libstdc++.so.6.0.13" && \
+  rm -rf "libstdc++6_4.4.3-4ubuntu5_amd64.deb" "libstdc++6_4.4.3-4ubuntu5_amd64"
+  ;;
+devtoolset-9)
+  # Download binary libstdc++ 4.8 shared library release
+  wget "http://old-releases.ubuntu.com/ubuntu/pool/main/g/gcc-4.8/libstdc++6_4.8.1-10ubuntu8_amd64.deb" && \
+  unar "libstdc++6_4.8.1-10ubuntu8_amd64.deb" && \
+  tar -C "/${TARGET}" -xvzf "libstdc++6_4.8.1-10ubuntu8_amd64/data.tar.gz" "./usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.18" && \
+  rm -rf "libstdc++6_4.8.1-10ubuntu8_amd64.deb" "libstdc++6_4.8.1-10ubuntu8_amd64"
+  ;;
+esac
 
 mkdir -p "${TARGET}-src"
 cd "${TARGET}-src"
 
-# Build a devtoolset cross-compiler based on our glibc 2.12 sysroot setup.
-
+# Build a devtoolset cross-compiler based on our glibc 2.12/glibc 2.17 sysroot setup.
 case "${VERSION}" in
 devtoolset-7)
   wget "http://vault.centos.org/centos/6/sclo/Source/rh/devtoolset-7/devtoolset-7-gcc-7.3.1-5.15.el6.src.rpm"
   rpm2cpio "devtoolset-7-gcc-7.3.1-5.15.el6.src.rpm" | cpio -idmv
   tar -xvjf "gcc-7.3.1-20180303.tar.bz2" --strip 1
   ;;
-devtoolset-8)
-  wget "http://vault.centos.org/centos/6/sclo/Source/rh/devtoolset-8/devtoolset-8-gcc-8.2.1-3.el6.src.rpm"
-  rpm2cpio "devtoolset-8-gcc-8.2.1-3.el6.src.rpm" | cpio -idmv
-  tar -xvf "gcc-8.2.1-20180905.tar.xz" --strip 1
+devtoolset-9)
+  wget "https://vault.centos.org/centos/7/sclo/Source/rh/devtoolset-9-gcc-9.3.1-2.2.el7.src.rpm"
+  rpm2cpio "devtoolset-9-gcc-9.3.1-2.2.el7.src.rpm" | cpio -idmv
+  tar -xvf "gcc-9.3.1-20200408.tar.xz" --strip 1
   ;;
 esac
@@ -109,22 +147,37 @@ cd "${TARGET}-build"
   --enable-plugin \
   --enable-shared \
   --enable-threads=posix \
-  --with-default-libstdcxx-abi="gcc4-compatible" \
+  --with-default-libstdcxx-abi=${LIBSTDCXX_ABI} \
   --with-gcc-major-version-only \
   --with-linker-hash-style="gnu" \
   --with-tune="generic" \
   && \
   make -j 42 && \
   make install
 
+
 # Create the devtoolset libstdc++ linkerscript that links dynamically against
 # the system libstdc++ 4.4 and provides all other symbols statically.
+case "${VERSION}" in
+devtoolset-7)
   mv "/${TARGET}/usr/lib/libstdc++.so.${LIBSTDCXX_VERSION}" \
     "/${TARGET}/usr/lib/libstdc++.so.${LIBSTDCXX_VERSION}.backup"
   echo -e "OUTPUT_FORMAT(elf64-x86-64)\nINPUT ( libstdc++.so.6.0.13 -lstdc++_nonshared44 )" \
     > "/${TARGET}/usr/lib/libstdc++.so.${LIBSTDCXX_VERSION}"
   cp "./x86_64-pc-linux-gnu/libstdc++-v3/src/.libs/libstdc++_nonshared44.a" \
     "/${TARGET}/usr/lib"
+  ;;
+devtoolset-9)
+  # Note that the installation path for libstdc++ here is /${TARGET}/usr/lib64/
+  mv "/${TARGET}/usr/lib64/libstdc++.so.${LIBSTDCXX_VERSION}" \
+    "/${TARGET}/usr/lib64/libstdc++.so.${LIBSTDCXX_VERSION}.backup"
+  echo -e "OUTPUT_FORMAT(elf64-x86-64)\nINPUT ( libstdc++.so.6.0.18 -lstdc++_nonshared44 )" \
+    > "/${TARGET}/usr/lib64/libstdc++.so.${LIBSTDCXX_VERSION}"
+  cp "./x86_64-pc-linux-gnu/libstdc++-v3/src/.libs/libstdc++_nonshared44.a" \
+    "/${TARGET}/usr/lib64"
+  ;;
+esac
 
 # Link in architecture specific includes from the system; note that we cannot
 # link in the whole x86_64-linux-gnu folder, as otherwise we're overlaying
@@ -136,4 +189,3 @@ PYTHON_VERSIONS=("python3.7m" "python3.8" "python3.9" "python3.10")
 for v in "${PYTHON_VERSIONS[@]}"; do
   ln -s "/usr/local/include/${v}" "/${TARGET}/usr/include/x86_64-linux-gnu/${v}"
 done
-
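The linkerscript step in this diff can be sketched in isolation: the shared object's filename is replaced by a plain-text GNU ld linkerscript that tells the linker to combine the old-ABI shared libstdc++ with the static `nonshared` archive. A minimal self-contained sketch in a throwaway directory (the sysroot path and version numbers mirror the devtoolset-9 branch but are illustrative here):

```shell
#!/bin/sh
# Sketch of the devtoolset-9 linkerscript trick in a temporary directory.
TARGET="$(mktemp -d)/dt9"
mkdir -p "${TARGET}/usr/lib64"

# Stand-in for the freshly built libstdc++ shared object.
touch "${TARGET}/usr/lib64/libstdc++.so.6.0.28"

# Back up the real shared object, then replace it with a linkerscript that
# links dynamically against the old libstdc++ 4.8 ABI and statically against
# everything newer via the nonshared archive.
mv "${TARGET}/usr/lib64/libstdc++.so.6.0.28" \
   "${TARGET}/usr/lib64/libstdc++.so.6.0.28.backup"
printf 'OUTPUT_FORMAT(elf64-x86-64)\nINPUT ( libstdc++.so.6.0.18 -lstdc++_nonshared44 )\n' \
   > "${TARGET}/usr/lib64/libstdc++.so.6.0.28"

# Anything that links -lstdc++ against this sysroot now pulls in both pieces.
grep -q 'nonshared44' "${TARGET}/usr/lib64/libstdc++.so.6.0.28" && echo "linkerscript in place"
```

The design point is that no downstream build flags change: consumers keep linking `-lstdc++` and transparently get old-ABI dynamic symbols plus statically provided newer ones.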

tf_sig_build_dockerfiles/devel.packages.txt

Lines changed: 1 addition & 0 deletions
@@ -30,6 +30,7 @@ clang-format-12
 colordiff
 curl
 ffmpeg
+gdb
 git
 jq
 less

tf_sig_build_dockerfiles/devel.usertools/cpu.bazelrc

Lines changed: 4 additions & 4 deletions
@@ -6,11 +6,11 @@ build:sigbuild_local_cache --disk_cache=/tf/cache
 # Use the public-access TF DevInfra cache (read only)
 build:sigbuild_remote_cache --remote_cache="https://storage.googleapis.com/tensorflow-devinfra-bazel-cache" --remote_upload_local_results=false
 # Write to the TF DevInfra cache (only works for internal TF CI)
-build:sigbuild_remote_cache_push --remote_cache="https://storage.googleapis.com/tensorflow-devinfra-bazel-cache" --google_default_credentials
+build:sigbuild_remote_cache_push --remote_cache="https://storage.googleapis.com/tensorflow-devinfra-bazel-cache/manylinux2014" --google_default_credentials
 # Change the value of CACHEBUSTER when upgrading the toolchain, or when testing
 # different compilation methods. E.g. for a PR to test a new CUDA version, set
 # the CACHEBUSTER to the PR number.
-build --action_env=CACHEBUSTER=r2.9
+build --action_env=CACHEBUSTER=r2.9_pr57
 
 # Use Python 3.X as installed in container image
 build --action_env PYTHON_BIN_PATH="/usr/bin/python3"
@@ -36,8 +36,8 @@ build --copt=-mavx --host_copt=-mavx
 # See https://docs.bazel.build/versions/main/skylark/performance.html#performance-profiling
 build --profile=/tf/pkg/profile.json
 
-# Use the NVCC toolchain to compile for manylinux2010
-build --crosstool_top=@sigbuild-r2.9_config_cuda//crosstool:toolchain
+# Use the NVCC toolchain to compile for manylinux2014
+build --crosstool_top=@ubuntu20.04-gcc9_manylinux2014-cuda11.2-cudnn8.1-tensorrt7.2_config_cuda//crosstool:toolchain
 
 # Test-related settings below this point.
 test --build_tests_only --keep_going --test_output=errors --verbose_failures=true
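The CACHEBUSTER mechanism works because `--action_env` values are part of every Bazel action's cache key: changing the variable invalidates all cached results against the shared remote cache without touching the build graph. A hypothetical local override for a one-off experimental build (the PR number 1234 is illustrative, not from this commit):

```
# In a local user bazelrc: bust the shared cache for an experimental build.
# Any new string works; the convention in this repo appears to be
# <release branch>_pr<PR number>.
build --action_env=CACHEBUSTER=r2.9_pr1234
```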

tf_sig_build_dockerfiles/devel.usertools/gpu.bazelrc

Lines changed: 4 additions & 4 deletions
@@ -6,11 +6,11 @@ build:sigbuild_local_cache --disk_cache=/tf/cache
 # Use the public-access TF DevInfra cache (read only)
 build:sigbuild_remote_cache --remote_cache="https://storage.googleapis.com/tensorflow-devinfra-bazel-cache" --remote_upload_local_results=false
 # Write to the TF DevInfra cache (only works for internal TF CI)
-build:sigbuild_remote_cache_push --remote_cache="https://storage.googleapis.com/tensorflow-devinfra-bazel-cache" --google_default_credentials
+build:sigbuild_remote_cache_push --remote_cache="https://storage.googleapis.com/tensorflow-devinfra-bazel-cache/manylinux2014" --google_default_credentials
 # Change the value of CACHEBUSTER when upgrading the toolchain, or when testing
 # different compilation methods. E.g. for a PR to test a new CUDA version, set
 # the CACHEBUSTER to the PR number.
-build --action_env=CACHEBUSTER=r2.9
+build --action_env=CACHEBUSTER=r2.9_pr57
 
 # Use Python 3.X as installed in container image
 build --action_env PYTHON_BIN_PATH="/usr/bin/python3"
@@ -42,9 +42,9 @@ build --repo_env TF_NEED_CUDA=1
 build --action_env=TF_CUDA_VERSION="11"
 build --action_env=TF_CUDNN_VERSION="8"
 build --action_env=CUDA_TOOLKIT_PATH="/usr/local/cuda-11.2"
-build --action_env=GCC_HOST_COMPILER_PATH="/dt7/usr/bin/gcc"
+build --action_env=GCC_HOST_COMPILER_PATH="/dt9/usr/bin/gcc"
 build --action_env=LD_LIBRARY_PATH="/usr/local/cuda:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:/usr/local/tensorrt/lib"
-build --crosstool_top=@sigbuild-r2.9_config_cuda//crosstool:toolchain
+build --crosstool_top=@ubuntu20.04-gcc9_manylinux2014-cuda11.2-cudnn8.1-tensorrt7.2_config_cuda//crosstool:toolchain
 
 # CUDA: Enable TensorRT optimizations
 # https://developer.nvidia.com/tensorrt

tf_sig_build_dockerfiles/devel.usertools/rename_and_verify_wheels.sh

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ set -euxo pipefail
 
 for wheel in /tf/pkg/*.whl; do
   echo "Checking and renaming $wheel..."
-  time python3 -m auditwheel repair --plat manylinux2010_x86_64 "$wheel" --wheel-dir /tf/pkg 2>&1 | tee check.txt
+  time python3 -m auditwheel repair --plat manylinux2014_x86_64 "$wheel" --wheel-dir /tf/pkg 2>&1 | tee check.txt
 
   # We don't need the original wheel if it was renamed
   new_wheel=$(grep --extended-regexp --only-matching '/tf/pkg/\S+.whl' check.txt)
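The `grep` extraction at the end of this script relies on auditwheel's repair log containing the path of the output wheel. A minimal sketch of that extraction against a canned log line (the wheel filename is a made-up example, not a real release artifact):

```shell
#!/bin/sh
# Simulate the kind of log line that rename_and_verify_wheels.sh parses.
echo 'Fixed-up wheel written to /tf/pkg/demo-1.0-cp39-cp39-manylinux_2_17_x86_64.whl' > check.txt

# Same extraction as the script: pull the first /tf/pkg/*.whl path from the log.
# Note \S is a GNU grep extension to ERE.
new_wheel=$(grep --extended-regexp --only-matching '/tf/pkg/\S+.whl' check.txt)
echo "${new_wheel}"   # echoes the matched wheel path
```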

tf_sig_build_dockerfiles/devel.usertools/wheel_verification.bats

Lines changed: 2 additions & 2 deletions
@@ -12,9 +12,9 @@ teardown_file() {
   rm -rf /tf/venv
 }
 
-@test "Wheel is manylinux2010 (manylinux_2_12) compliant" {
+@test "Wheel is manylinux2014 (manylinux_2_17) compliant" {
   python3 -m auditwheel show "$TF_WHEEL" > audit.txt
-  grep --quiet 'This constrains the platform tag to "manylinux_2_12_x86_64"' audit.txt
+  grep --quiet 'This constrains the platform tag to "manylinux_2_17_x86_64"' audit.txt
 }
 
 @test "Wheel conforms to upstream size limitations" {

tf_sig_build_dockerfiles/setup.python.sh

Lines changed: 4 additions & 0 deletions
@@ -21,8 +21,12 @@ EOF
 # for any Python version present
 pushd /usr/include/x86_64-linux-gnu
 for f in $(ls | grep python); do
+  # set up symlink for devtoolset-7
   rm -f /dt7/usr/include/x86_64-linux-gnu/$f
   ln -s /usr/include/x86_64-linux-gnu/$f /dt7/usr/include/x86_64-linux-gnu/$f
+  # set up symlink for devtoolset-9
+  rm -f /dt9/usr/include/x86_64-linux-gnu/$f
+  ln -s /usr/include/x86_64-linux-gnu/$f /dt9/usr/include/x86_64-linux-gnu/$f
 done
 popd
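The loop in this diff follows a general pattern: re-point each per-Python include directory from the system into a devtoolset sysroot, removing any stale entry first so the link is idempotent. A self-contained sketch with throwaway stand-in directories (all paths are illustrative):

```shell
#!/bin/sh
# Sketch of the include-symlink loop using temporary stand-in directories.
SYS="$(mktemp -d)"   # stands in for /usr/include/x86_64-linux-gnu
DT9="$(mktemp -d)"   # stands in for /dt9/usr/include/x86_64-linux-gnu
mkdir -p "${SYS}/python3.9"

for f in $(ls "${SYS}" | grep python); do
  # Remove any stale entry, then link the system headers into the sysroot.
  rm -f "${DT9}/${f}"
  ln -s "${SYS}/${f}" "${DT9}/${f}"
done

readlink "${DT9}/python3.9"   # resolves to the stand-in system include dir
```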
