For each GitHub release, automatically upload a tar archive with
libcurl-impersonate compiled for Ubuntu and macOS. Previously only a
statically compiled version of the curl-impersonate binary was uploaded.
The macOS build is failing due to compiler options previously added to
support gcc 12 on Fedora, which clang does not recognize. Add
"-Wno-unknown-warning-option" to silence this failure.
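A minimal sketch of the fix, assuming the flag is appended to the CFLAGS
used for the affected build step (the exact variable and placement may
differ):

    # Illustrative only: with this flag clang ignores unrecognized gcc-only
    # -Wno-... options instead of treating them as errors
    CFLAGS="$CFLAGS -Wno-unknown-warning-option"
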
HTTP/2 includes various settings pertaining to stream priorities. Chrome
and Firefox handle them differently, and this behavior was not mimicked
well by curl-impersonate. With this commit, the stream priority settings
sent by curl-impersonate are identical to those of the real browsers
(see the verification sketch after the list below).
* With Chrome, the default stream weight is 256 and the "exclusive bit"
is set to ON.
* With Firefox, a complex tree of stream dependencies is created
by default using PRIORITY frames. This behavior is now mimicked by
curl-impersonate.
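One way to verify the mimicked behavior is to point curl-impersonate at
a local nghttpd instance running in verbose mode and read the
PRIORITY/HEADERS frames it logs; a rough sketch, where the
curl-impersonate-ff binary name and the throwaway certificate are
assumptions:

    # Generate a throwaway self-signed certificate for the local server
    openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem \
        -subj "/CN=localhost" -days 1

    # nghttpd -v prints every frame it receives, including PRIORITY frames
    # and the weight/exclusive flag carried on HEADERS
    nghttpd -v 8443 key.pem cert.pem &

    # -k: accept the self-signed certificate
    ./curl-impersonate-ff -k https://localhost:8443/ > /dev/null
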
* Add compiler flags to the boringssl build step to suppress gcc errors
raised when compiling with gcc 12+, which is present in Fedora. It is not
clear whether this is a compiler bug or a boringssl bug; however, the
errors only appear when building in release mode with gcc 12, so I don't
see any problem with manually suppressing them.
* Add documentation for building on Fedora.
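For illustration, the suppression amounts to something like the
following cmake invocation for boringssl, where the specific -Wno-...
names are placeholders for whatever gcc 12 reports:

    # Hypothetical sketch: append the suppressions to the C flags of the
    # boringssl release build so the gcc 12 false positives don't abort it
    cmake -GNinja -DCMAKE_BUILD_TYPE=Release \
        -DCMAKE_C_FLAGS="-Wno-array-bounds -Wno-stringop-overflow" \
        path/to/boringssl
    ninja
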
Tweak the curl-impersonate build system to make it work on Red Hat
based systems such as CentOS, Fedora and Amazon Linux.
* Change the Makefile to be more portable with different bash versions.
* Detect whether cmake is 'cmake' or 'cmake3', and whether ninja is
'ninja' or 'ninja-build' (see the sketch after this list).
* Explicitly tell brotli to put its libraries in 'lib' dir, otherwise it
might put them in 'lib64' where curl doesn't find them.
* Add instructions for building curl-impersonate from source on Red Hat
based systems.
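A sketch of the detection and of the brotli tweak, assuming a
shell-based Makefile recipe (variable names and paths are illustrative):

    # Prefer the Red Hat package names when present, fall back to the usual ones
    CMAKE=$(command -v cmake3 || command -v cmake)
    NINJA=$(command -v ninja-build || command -v ninja)

    # Force brotli to install its libraries under 'lib' so that curl's
    # configure step finds them even where the default libdir is 'lib64'
    $CMAKE -DCMAKE_INSTALL_LIBDIR=lib -DCMAKE_BUILD_TYPE=Release path/to/brotli
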
curl_easy_reset() may be used by an application to reset the options on
a curl handle. If an app has the CURL_IMPERSONATE env var defined, then
the impersonation options are automatically set in curl_easy_init() but
will be cleared in a call to curl_easy_reset(). The desired behavior is
for the impersonation options to be retained (as they are "transparent"
to the user), which this commit takes care of.
Note that this only has an effect when libcurl-impersonate is loaded and
the CURL_IMPERSONATE env var is set. Otherwise the regular behavior of
resetting all the handle options is retained.
Test that the unique TLS signature of curl-impersonate is preserved
after a call to curl_easy_reset() when libcurl-impersonate is loaded.
For this purpose, change the 'minicurl' testing util to support multiple
URLs, and launch it with two different URLs when testing the TLS signature.
Add support for impersonating Chrome 101, and for Edge 101 as well. The
TLS fingerprint is identical to previous versions. The HTTP headers have
the usual differences in the user agents. One important change, though,
is in the way the HTTP/2 SETTINGS frame is formed. Up until Chrome 98
there was an additional, randomly-generated setting in the frame. This
seems to have been removed since. Therefore it was removed from
curl-impersonate as well, and support for Chrome/Edge 98 was deprecated,
since supporting both signatures would require a lot of work.
Add support for impersonating Firefox 100. The TLS signature is identical to
previous versions of Firefox.
In addition, upgrade NSS (Firefox's TLS library) to version 3.77, the
one used by Firefox 100. This is not strictly necessary, as the
previously used version works just fine, but it's better to keep up with
the newest version.
When reusing a curl handle on which the 'Host' header was explicitly
set, the previously-set header was being kept in use for following
requests.
The issue was in curl-impersonate's merging of user-supplied headers
with its own list of browser headers. The call to
Curl_http_merge_headers(), which takes care of this, had been placed
after the handling of the Host header, which caused the previously set
header to be used.
* Set curl's HTTP/2 window size to match Chrome and Firefox. This
affects the "Window Size Increment" parameter in the WINDOW_UPDATE
HTTP/2 frame sent out by curl, which differed from the one sent by
Chrome or Firefox (see the inspection sketch below).
* Set curl's HTTP/2 SETTINGS frame to match Firefox.
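For reference, one way to inspect the SETTINGS and WINDOW_UPDATE frames
that curl-impersonate sends is to capture the traffic and decrypt it
with a TLS key log; a rough sketch, assuming the build honors
SSLKEYLOGFILE and that tcpdump and tshark are available (binary name and
URL are illustrative):

    # Capture HTTPS traffic while the request runs (may require root)
    sudo tcpdump -i any -w /tmp/h2.pcap 'tcp port 443' &
    TCPDUMP_PID=$!
    sleep 1

    # Dump TLS session keys so tshark can decrypt the capture afterwards
    SSLKEYLOGFILE=/tmp/keys.log ./curl-impersonate-chrome -o /dev/null https://example.com
    sudo kill "$TCPDUMP_PID"

    # http2.type 4 = SETTINGS, 8 = WINDOW_UPDATE
    tshark -r /tmp/h2.pcap -o tls.keylog_file:/tmp/keys.log \
        -Y 'http2.type == 4 || http2.type == 8' -O http2
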
libnssckbi is loaded at runtime by NSS. On some systems it is located in
a non-standard location that dlopen() can't find. For example, on Ubuntu
it may be in /usr/lib/x86_64-linux-gnu and on a Mac M1 in
/opt/homebrew/nss. This becomes a problem when NSS is linked statically.
Search for libnssckbi in the configure script and add the relevant path
using the '-rpath' linker flag. In addition, drop the previous hack for
Ubuntu that searched for libnssckbi in a hardcoded location.
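The idea, roughly, in configure-style shell (the searched paths and the
variable names are illustrative):

    # Look for libnssckbi in a few known locations and remember its directory
    for dir in /usr/lib/x86_64-linux-gnu /usr/lib64 /usr/lib /opt/homebrew/nss; do
        if ls "$dir"/libnssckbi.* >/dev/null 2>&1; then
            nssckbi_dir="$dir"
            break
        fi
    done

    # Embed that directory in the binary's runtime search path so NSS's
    # dlopen() of libnssckbi succeeds even though NSS itself is static
    LDFLAGS="$LDFLAGS -Wl,-rpath,$nssckbi_dir"
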
It seems the tests may fail because we don't wait for nghttpd (the
HTTP/2 server used for testing) to actually start listening.
This commit uses asyncio to launch nghttpd and read its stdout until it
starts listening.
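The actual fix parses nghttpd's output with asyncio; as a simpler shell
illustration of the same idea, one can poll the port until it accepts
connections before sending any test traffic (port, certificate and
document root are placeholders):

    nghttpd -d htdocs 8443 key.pem cert.pem &

    # Don't fire requests until the server socket is actually accepting
    until nc -z localhost 8443; do
        sleep 0.1
    done
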
Injecting libcurl-impersonate with LD_PRELOAD is supported on
Linux only. On Mac there is DYLD_INSERT_LIBRARIES, but it
requires more work to be fully functional.
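For example, on Linux an existing curl-based application can be made to
impersonate a browser without recompilation, roughly like this (the
library path, target name and application name are illustrative):

    # Inject libcurl-impersonate in place of the system libcurl;
    # CURL_IMPERSONATE picks which browser's options are applied automatically
    LD_PRELOAD=/usr/local/lib/libcurl-impersonate-chrome.so \
    CURL_IMPERSONATE=chrome101 \
        my_curl_based_app
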
This is an attempt to link with NSS statically on macOS (both Intel and
Apple M1).
Statically linking with NSS is a total mess and completely
undocumented. There are 20+ .a files to link with, and their linking
order matters. The main reference for this commit is Mozilla's Rust code
responsible for statically linking NSS:
b2690fd2e4/components/support/rc_crypto/nss/nss_build_common/src/lib.rs#L94
Unfortunately, even with that in hand, a lot of hacking is needed to
make it all work.
Add a few commands to the Dockerfile to check that 'curl-impersonate'
was compiled correctly: check that it has brotli, HTTP/2 and TLS support,
and check that the dependencies were compiled statically.
These are basic checks that are useful when modifying the Dockerfile:
sometimes even small modifications cause curl to be compiled
incorrectly without failing the build.
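The checks can be as simple as grepping the 'curl-impersonate -V' output
and making sure none of the bundled dependencies show up as runtime
libraries; a sketch of what the RUN lines might contain (exact patterns
are assumptions):

    # Feature checks: the version output should mention brotli, nghttp2 and SSL
    ./curl-impersonate -V | grep -q brotli
    ./curl-impersonate -V | grep -q nghttp2
    ./curl-impersonate -V | grep -q SSL

    # Static-linking check: none of the bundled deps should appear as
    # dynamic dependencies of the binary
    ! ldd ./curl-impersonate | grep -q -e libcurl -e nghttp2 -e brotli -e ssl
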
Previously '-l:nghttp2.a' was used to specify static linking with
nghttp2 and to stop the linker from linking dynamically with
libnghttp2.so. This way of linking is not supported on macOS. Instead,
add '--disable-shared' to prevent libnghttp2.so from being compiled at
all. This way the linker finds only the static library and links
against it.
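In nghttp2's configure step this looks roughly like the following (the
prefix and the flags other than '--disable-shared' are illustrative):

    # Build only the static library; with no libnghttp2.so installed, the
    # linker can only pick up libnghttp2.a
    ./configure --prefix="$PWD/install" --enable-lib-only --with-pic --disable-shared
    make && make install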