(Replying to PARENT post)

Is there any SPARQL implementation that returns results in under 10 seconds on a big dataset? I ask because I've never found a public SPARQL endpoint that gives remotely acceptable response times.
👤tasogare🕑5y🔼0🗨️0

(Replying to PARENT post)

While there may be several things missing for many productive use cases (especially inserts/updates), I think QLever (https://github.com/ad-freiburg/QLever) fits that description very well. There's also a public endpoint linked there.
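
If you want to check the response times yourself, here is a minimal sketch (Python + requests) that times a count-all-humans query over the standard SPARQL protocol. The endpoint URL is my assumption about QLever's public Wikidata instance; the repository links the endpoints it currently hosts.

    import time
    import requests

    # NOTE: assumed public QLever endpoint for its Wikidata index; the exact
    # URL may differ -- see the QLever repository for the hosted endpoints.
    QLEVER = "https://qlever.cs.uni-freiburg.de/api/wikidata"

    # Count all humans (instance of Q5) -- a query that touches a lot of data.
    QUERY = """
    PREFIX wdt: <http://www.wikidata.org/prop/direct/>
    PREFIX wd: <http://www.wikidata.org/entity/>
    SELECT (COUNT(?person) AS ?n) WHERE { ?person wdt:P31 wd:Q5 }
    """

    start = time.monotonic()
    resp = requests.get(QLEVER, params={"query": QUERY},
                        headers={"Accept": "application/sparql-results+json"},
                        timeout=30)
    resp.raise_for_status()
    elapsed = time.monotonic() - start
    count = resp.json()["results"]["bindings"][0]["n"]["value"]
    print(f"{count} matches in {elapsed:.2f}s")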
👤bjoernbu🕑5y🔼0🗨️0

(Replying to PARENT post)

Under 10 seconds on which query? If your query involves big joins over a large distributed dataset, there won't be a technology that can do it for you. SPARQL is not the problem: you can write the same queries in Cypher or any other language, and you will hit the same performance problems.
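
To make that concrete, here is a rough sketch with a purely made-up schema: the same two-hop join written once as SPARQL and once as Cypher. The shape of the join, and therefore its cost on a large graph, is identical in both.

    # Purely illustrative schema: ex:knows / :KNOWS edges between people.
    # The query language changes, the join does not.

    SPARQL_TWO_HOP = """
    PREFIX ex: <http://example.org/>
    SELECT ?a ?c WHERE {
      ?a ex:knows ?b .
      ?b ex:knows ?c .
    }
    """

    CYPHER_TWO_HOP = """
    MATCH (a:Person)-[:KNOWS]->(b:Person)-[:KNOWS]->(c:Person)
    RETURN a, c
    """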
👤matteuan🕑5y🔼0🗨️0

(Replying to PARENT post)

I'm not sure exactly what you mean by implementation here, but many (most?) of the Wikidata examples (on its public endpoint) are very fast, e.g.: https://query.wikidata.org/#%23Cats%0ASELECT%20%3Fitem%20%3F...
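
For reference, a small sketch of running that kind of example programmatically against the public endpoint (Python + requests). The "Cats" query is reproduced from memory, so treat the exact pattern as approximate.

    import requests

    WDQS = "https://query.wikidata.org/sparql"

    # The "Cats" example from the Wikidata Query Service, from memory:
    # every item that is an instance of (P31) house cat (Q146), with label.
    # WDQS has the wd:/wdt:/wikibase:/bd: prefixes built in.
    QUERY = """
    SELECT ?item ?itemLabel WHERE {
      ?item wdt:P31 wd:Q146 .
      SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
    }
    LIMIT 20
    """

    resp = requests.get(WDQS, params={"query": QUERY},
                        headers={"Accept": "application/sparql-results+json",
                                 # WDQS asks for an identifying User-Agent.
                                 "User-Agent": "sparql-example/0.1 (illustrative)"},
                        timeout=30)
    resp.raise_for_status()
    for row in resp.json()["results"]["bindings"]:
        print(row["item"]["value"], row.get("itemLabel", {}).get("value", ""))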
👤virgil_disgr4ce🕑5y🔼0🗨️0

(Replying to PARENT post)

Of course there are. SPARQL endpoints that are open for public access at no cost to the user, and potentially hit by who knows how many clients concurrently, can't be used as benchmarks.
👤tannhaeuser🕑5y🔼0🗨️0

(Replying to PARENT post)

Someone already mentioned that public endpoints aren't good benchmarks.

But there are many performant SPARQL-enabled databases (back when I wrote my Master's thesis in 2014, that even included Oracle), though there are details to consider, like batch vs. realtime materialization and the like.

In my experience, AllegroGraph was fast enough (apparently someone even uses it now to translate between the HL7 schemas of multiple providers in the USA).

👤p_l🕑5y🔼0🗨️0