asked May 28 by datalab (1,810 points)

1 Answer


In principle, no. But if your query is expected to return a very large number of rows, it may trigger a timeout, or even an error message saying that the query is not suitable for synchronous mode.

In that case, please submit your query in asynchronous mode. In a Jupyter notebook, you would write:

from dl import queryClient as qc

# token = your Data Lab auth token, obtained earlier via the authClient
query = 'SELECT ...'  # my complex or long query string

# submit asynchronously; the call returns a job ID immediately
# (newer clients use async_=True, since 'async' is reserved in Python 3.7+)
jobid = qc.query(token, query, async=True)

# ... and after a while, check on the job
status = qc.status(token, jobid)
if status == 'COMPLETED':
    result = qc.results(token, jobid)
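
Rather than checking once, you will typically poll the status until the job finishes. Here is a minimal sketch (it assumes qc.status() reports states such as 'COMPLETED' and 'ERROR'; adjust the wait time to your query):

import time

# poll the job until it reaches a terminal state
while True:
    status = qc.status(token, jobid)
    if status == 'COMPLETED':
        result = qc.results(token, jobid)
        break
    elif status == 'ERROR':
        raise RuntimeError('asynchronous query failed')
    time.sleep(10)  # seconds between status checks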

Another problem, even in asynchronous mode, is trying to load too many rows into memory (e.g. if you are working on the Jupyter notebook server). In that case, write your query results directly to a file in your VOSpace instead:

jobid = qc.query(token, query, async=True, out='vos://results.csv')
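
Once that job completes, the file lives in your VOSpace rather than in notebook memory. To bring it down to local disk you can use the storeClient; this is a sketch assuming a get(token, fr, to) interface, so check the Data Lab documentation for the exact signature:

from dl import storeClient as sc

# copy the finished result file from VOSpace to the local working directory
# (the fr/to keyword names are an assumption; verify against the storeClient docs)
sc.get(token, fr='vos://results.csv', to='./results.csv')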
answered May 30 by robnik (1,000 points)
