* [mod] /image_proxy: don't decompress images
* [fix] image_proxy: always close the httpx response
previously, when the content type was not an image or some other error
occurred, the httpx response was not closed
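The shape of the fix can be sketched as follows; `FakeResponse` and `proxy_image` are illustrative stand-ins for the httpx objects used by /image_proxy, not the actual searx code:

```python
# Minimal sketch of the fix: close the response on every code path,
# including the non-image and error cases. All names are illustrative
# stand-ins, not the real /image_proxy implementation.

class FakeResponse:
    def __init__(self, content_type):
        self.headers = {"Content-Type": content_type}
        self.closed = False

    def close(self):
        self.closed = True


def proxy_image(response):
    try:
        if not response.headers.get("Content-Type", "").startswith("image/"):
            raise ValueError("not an image")
        return b"...image bytes..."
    finally:
        # the fix: close() now runs even when the check above raises
        response.close()


resp = FakeResponse("text/html")
try:
    proxy_image(resp)
except ValueError:
    pass
print(resp.closed)  # → True
```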
* [mod] /image_proxy: use HTTP/1 instead of HTTP/2
httpx: HTTP/2 is slow when a lot of data is downloaded.
https://github.com/dalf/pyhttp-benchmark
also, the usage of HTTP/1 decreases the load average
* [mod] searx.utils.dict_subset: rewrite with comprehension
Co-authored-by: Alexandre Flament <alex@al-f.net>
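The comprehension-based rewrite presumably looks like this; a minimal sketch, not necessarily the exact searx.utils code:

```python
def dict_subset(dictionary, properties):
    """Return a new dict with only the keys from ``properties`` that are
    present in ``dictionary`` (comprehension-based sketch of
    searx.utils.dict_subset, not necessarily the exact code)."""
    return {k: dictionary[k] for k in properties if k in dictionary}


print(dict_subset({"a": 1, "b": 2, "c": 3}, ("a", "c", "x")))  # → {'a': 1, 'c': 3}
```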
* [enh] Add Tineye reverse image search
Other optional parameters:
"&sort=crawl_date" can be appended to search_string to sort results by date.
"&domain=example.org" can be appended to search_string to get results from just one domain.
Public instances could get timed out (for 3600 s) relatively quickly.
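Assembling those optional parameters could look like this; `build_search_string` is a hypothetical helper, and only the two parameters above come from the commit notes:

```python
from urllib.parse import urlencode


def build_search_string(query, sort_by_date=False, domain=None):
    # hypothetical helper: only "sort" and "domain" are documented above
    params = {"url": query}
    if sort_by_date:
        params["sort"] = "crawl_date"
    if domain:
        params["domain"] = domain
    return urlencode(params)


print(build_search_string("img.png", sort_by_date=True, domain="example.org"))
# → url=img.png&sort=crawl_date&domain=example.org
```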
* [enh] Add Tineye to settings.yml
Check if that's the right shortcut.
* [mod] Fix checks
* [mod] Try to fix checks
* [mod] Use four spaces for indentation
And set paging back to True
Co-authored-by: Noémi Ványi <kvch@users.noreply.github.com>
In case of CAPTCHA, raise a SearxEngineCaptchaException and suspend for 7 days.
When get_sc_code() fails, raise a SearxEngineResponseException and suspend
for 7 days.
[1] https://github.com/searxng/searxng/pull/695
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
Startpage has introduced new anti-scraping measures that make SearXNG instances
run into captchas:
1. some arguments have been removed and a new `sc` argument has been added.
2. search path changed from `do/search` to `sp/search`
3. POST request is no longer needed
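Under those three changes, the request the engine now sends can be sketched roughly as follows (the helper name and `sc` value are placeholders; obtaining the real `sc` code is the job of get_sc_code()):

```python
from urllib.parse import urlencode

base_url = "https://www.startpage.com/"


def build_search_url(query, sc_code):
    # GET request (point 3), new path sp/search (point 2), and the new
    # `sc` argument (point 1); the sc value here is a placeholder
    args = {"query": query, "sc": sc_code}
    return base_url + "sp/search?" + urlencode(args)


print(build_search_url("searxng", "abc123"))
# → https://www.startpage.com/sp/search?query=searxng&sc=abc123
```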
Closes: https://github.com/searxng/searxng/issues/692
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
## What does this PR do?
Gives the user the possibility to search their own Prowlarr instances.
Info: https://wiki.servarr.com/en/prowlarr
Github: https://github.com/Prowlarr/Prowlarr
## Why is this change important?
Prowlarr searches multiple upstream search providers, which makes that functionality available through searx.
* [enh] Add autocompleter from Brave
Raw response example: https://search.brave.com/api/suggest?q=how%20to:%20with%20j
Headers are needed in order to get a 200 response, thus the searx user agent is used.
Another optional URL parameter is '&rich=false' or '&rich=true'.
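Building the request from the raw-response example above might look like this; the header value is illustrative (the actual code uses searx's user-agent helper):

```python
from urllib.parse import urlencode


def brave_suggest_request(query, rich=False):
    # URL taken from the raw response example above; `rich` is the
    # optional parameter mentioned in the notes
    params = {"q": query, "rich": "true" if rich else "false"}
    url = "https://search.brave.com/api/suggest?" + urlencode(params)
    # a user agent header is required to get a 200 response
    headers = {"User-Agent": "searx"}  # illustrative value
    return url, headers


url, headers = brave_suggest_request("how to")
print(url)
```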
Languages are supported by mapping the language to a domain. If the domain is
not found in :py:obj:`lang2domain`, the URL ``<lang>.search.yahoo.com`` is used.
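A sketch of that mapping-with-fallback; the table entries here are illustrative, not the real lang2domain contents:

```python
# illustrative entries: the real lang2domain only lists languages whose
# Yahoo domain differs from the plain <lang>. prefix
lang2domain = {
    "en": "search.yahoo.com",
    "zh_chs": "hk.search.yahoo.com",
}


def yahoo_domain(lang):
    # fallback: <lang>.search.yahoo.com when the language is unmapped
    return lang2domain.get(lang, f"{lang}.search.yahoo.com")


print(yahoo_domain("fr"))  # → fr.search.yahoo.com
print(yahoo_domain("en"))  # → search.yahoo.com
```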
Closes: #3020
Signed-off-by: Markus Heiser <markus.heiser@darmarit.de>
Add configurable setting to rank search results higher when part of the
domain (e.g. 'en' in 'en.wikipedia.org' or 'de' in 'beispiel.de')
matches the selected search language. Does not apply to e.g. 'be' in
'youtube.com'.
Closes: #206
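The matching rule described above (match whole subdomain labels or the TLD, never an arbitrary substring) could be sketched as:

```python
# Sketch of the language/domain match rule; illustrative, not the
# actual searx implementation.

def matches_language(netloc, lang):
    parts = netloc.split(".")
    # 'en' in 'en.wikipedia.org' → subdomain label matches
    # 'de' in 'beispiel.de'      → TLD matches
    # 'be' in 'youtube.com'      → no match (not a whole label)
    return lang in (parts[0], parts[-1])


print(matches_language("en.wikipedia.org", "en"))  # → True
print(matches_language("beispiel.de", "de"))       # → True
print(matches_language("youtube.com", "be"))       # → False
```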
The patch introduced earlier broke the behaviour for instance admins
running searx from packages. This fix aims to provide compatibility
for everyone.
Closes: #3061