(Replying to PARENT post)
While it may still be missing several things for many production use cases (especially inserts/updates), I think QLever (https://github.com/ad-freiburg/QLever) fits that description very well. There's also a public endpoint linked there.
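For example, a query like the following is the kind of thing it handles quickly (a minimal sketch; the IRIs are Wikidata's, and I'm assuming the linked public endpoint serves a Wikidata dump):

    PREFIX wd: <http://www.wikidata.org/entity/>
    PREFIX wdt: <http://www.wikidata.org/prop/direct/>
    # Count all instances of "human" (Q5): a scan over millions
    # of triples, so it exercises the index, not just the parser
    SELECT (COUNT(?person) AS ?count) WHERE {
      ?person wdt:P31 wd:Q5 .
    }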
bjoernbu · 5y · 0 points · 0 comments
(Replying to PARENT post)
Under 10 seconds on which query? If your query involves big joins over a large, distributed data set, no technology will do it for you. SPARQL is not the problem: you can write the same queries in Cypher or any other language, and you will hit the same performance problems.
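To make that concrete, here's a sketch of the kind of join-heavy query that hurts in any language (the property IDs are Wikidata's, used here only for illustration):

    PREFIX wd:  <http://www.wikidata.org/entity/>
    PREFIX wdt: <http://www.wikidata.org/prop/direct/>
    # Three chained triple patterns: people, their birthplaces,
    # and the countries of those birthplaces. Every extra
    # pattern is another join the engine must execute, whatever
    # the surface syntax.
    SELECT ?person ?city ?country WHERE {
      ?person wdt:P31 wd:Q5 .      # instance of: human
      ?person wdt:P19 ?city .      # place of birth
      ?city   wdt:P17 ?country .   # country
    }

A Cypher equivalent (roughly MATCH (p:Human)-[:BORN_IN]->(c)-[:IN_COUNTRY]->(n)) forces exactly the same joins.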
matteuan · 5y · 0 points · 0 comments
(Replying to PARENT post)
I'm not sure exactly what you mean by implementation here, but many (most?) of the Wikidata examples (on its public endpoint) are very fast, e.g.: https://query.wikidata.org/#%23Cats%0ASELECT%20%3Fitem%20%3F...
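(The truncated link looks like the standard "Cats" example from the query UI; for reference, it's roughly the following, with the wd:/wdt:/wikibase:/bd: prefixes predefined by the endpoint:)

    #Cats
    SELECT ?item ?itemLabel WHERE {
      ?item wdt:P31 wd:Q146 .   # instance of: house cat
      SERVICE wikibase:label {
        bd:serviceParam wikibase:language "[AUTO_LANGUAGE],en" .
      }
    }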
virgil_disgr4ce · 5y · 0 points · 0 comments
(Replying to PARENT post)
Of course there are. SPARQL endpoints that are open to the public at no cost to the user, and potentially accessed concurrently by who knows how many clients, can't be used as benchmarks.
tannhaeuser · 5y · 0 points · 0 comments
(Replying to PARENT post)
Someone already mentioned that public endpoints aren't good benchmarks.
But there are many performant SPARQL-enabled databases (back when I wrote my Master's thesis in 2014, that even included Oracle), though there are details to consider, like batch vs. real-time materialization.
In my experience, AllegroGraph was fast enough (apparently someone even uses it now to translate between the HL7 schemas of multiple providers in the US).
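That kind of schema translation is what SPARQL CONSTRUCT is for. A minimal sketch, with hypothetical example.org namespaces standing in for the real HL7 vocabularies:

    PREFIX providerA: <http://example.org/providerA/>
    PREFIX providerB: <http://example.org/providerB/>
    # Hypothetical mapping: rewrite one provider's patient
    # records into another provider's vocabulary
    CONSTRUCT {
      ?patient providerB:familyName ?name .
      ?patient providerB:dateOfBirth ?dob .
    } WHERE {
      ?patient providerA:surname ?name .
      ?patient providerA:birthDate ?dob .
    }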
p_l · 5y · 0 points · 0 comments