Release EDDN - Elite Dangerous Data Network. Trading tools sharing info in a unified way.

Status: Thread Closed - Not open for further replies.
Thanks to themroc, I have been able to put together a MapReduce program using Apache Spark to calculate the best trade route pairs in the galaxy. I will probably expand it to finding the best cycles, simply because I think longer cycles might actually produce better credit/hour returns than strict pairs of trades. Right now, using Slopey's BPC data or EDDB data, I can run a galaxy-wide search for trade routes of 20 Ly or less in about 30 seconds on a 2012 MacBook Pro. Getting Spark to run on Windows is a bit of a challenge because you need to set up Hadoop for Windows, but on Linux/BSD/Mac based systems this should be pretty easy.

Repo is on Github at huadianz/elite-trade-analyzer. Can't post links apparently.
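
For anyone who wants a feel for what a pairwise search like this looks like in PySpark, here is a minimal, self-contained sketch. The field layout, sample data, and helper names (distance_ly, the listing tuples) are my own illustration, not taken from the repository; the actual implementation lives in spark.py there.

Code:
# Rough PySpark sketch of a pairwise trade-route search (illustrative only;
# field names, sample data and helpers below are assumptions, not the repo's code).
from math import sqrt
from pyspark import SparkContext

sc = SparkContext(appName="trade-pair-sketch")

# (station, (x, y, z)) -- positions in light years (made-up sample values)
positions = sc.parallelize([
    ("Lave Station", (75.8, 48.8, 70.8)),
    ("George Lucas", (72.8, 48.8, 68.3)),
])

# (station, commodity, buy_price, sell_price) -- made-up sample listings
listings = sc.parallelize([
    ("Lave Station", "Tantalum", 3830, 3810),
    ("George Lucas", "Tantalum", 0, 4021),
])

MAX_LY = 20.0

def distance_ly(a, b):
    return sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

# Join listings with themselves on commodity, keep pairs where the origin
# actually sells the good (buy_price > 0), and take the best margin per pair.
by_commodity = listings.map(lambda l: (l[1], (l[0], l[2], l[3])))
pair_profits = (by_commodity.join(by_commodity)
    .filter(lambda kv: kv[1][0][0] != kv[1][1][0] and kv[1][0][1] > 0)
    .map(lambda kv: ((kv[1][0][0], kv[1][1][0]), kv[1][1][2] - kv[1][0][1]))
    .filter(lambda kv: kv[1] > 0)
    .reduceByKey(max))

# Drop pairs that are farther apart than the FSD range we care about.
pos = dict(positions.collect())
in_range = pair_profits.filter(
    lambda kv: distance_ly(pos[kv[0][0]], pos[kv[0][1]]) <= MAX_LY)

for (src, dst), profit in in_range.takeOrdered(10, key=lambda kv: -kv[1]):
    print("%s -> %s: %d cr/unit" % (src, dst, profit))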

You'll be able to post links after the second post, I think. I'm trying to give your tool a spin on my linux box, but I can't find spark-submit in the distribution (under ubuntu). Where can this be found, or can you post the contents of spark-submit?

EDIT: nvm, building spark from github, will let you know if I got it running
 
You'll be able to post links after the second post, I think. I'm trying to give your tool a spin on my linux box, but I can't find spark-submit in the distribution (under ubuntu). Where can this be found, or can you post the contents of spark-submit?

EDIT: nvm, building spark from github, will let you know if I got it running

There are prebuilt binaries available on the Apache Spark site. If you want a package-managed version, Cloudera's repository has Spark and Hadoop binaries.
 
There are prebuilt binaries available on the Apache Spark site. If you want a package-managed version, Cloudera's repository has Spark and Hadoop binaries.

I've got a worker thread crashing, I'll investigate tomorrow. It's a VM with 1GB RAM, but it might not be enough for this.

Code:
15/01/20 00:54:07 ERROR Executor: Exception in task 0.0 in stage 4.0 (TID 8)
org.apache.spark.SparkException: Python worker exited unexpectedly (crashed)
        at org.apache.spark.api.python.PythonRDD$$anon$1.read(PythonRDD.scala:169)
        at org.apache.spark.api.python.PythonRDD$$anon$1.<init>(PythonRDD.scala:173)
        at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:95)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:264)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:231)
        at org.apache.spark.api.python.PairwiseRDD.compute(PythonRDD.scala:304)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:264)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:231)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
        at org.apache.spark.scheduler.Task.run(Task.scala:64)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:192)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.EOFException
        at java.io.DataInputStream.readInt(DataInputStream.java:392)
        at org.apache.spark.api.python.PythonRDD$$anon$1.read(PythonRDD.scala:109)
        ... 14 more
15/01/20 00:54:07 WARN TaskSetManager: Lost task 0.0 in stage 4.0 (TID 8, localhost): org.apache.spark.SparkException: Python worker exited unexpectedly (crashed)
        at org.apache.spark.api.python.PythonRDD$$anon$1.read(PythonRDD.scala:169)
        at org.apache.spark.api.python.PythonRDD$$anon$1.<init>(PythonRDD.scala:173)
        at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:95)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:264)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:231)
        at org.apache.spark.api.python.PairwiseRDD.compute(PythonRDD.scala:304)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:264)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:231)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
        at org.apache.spark.scheduler.Task.run(Task.scala:64)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:192)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.EOFException
        at java.io.DataInputStream.readInt(DataInputStream.java:392)
        at org.apache.spark.api.python.PythonRDD$$anon$1.read(PythonRDD.scala:109)
        ... 14 more

15/01/20 00:54:07 ERROR TaskSetManager: Task 0 in stage 4.0 failed 1 times; aborting job
15/01/20 00:54:07 INFO TaskSchedulerImpl: Cancelling stage 4
15/01/20 00:54:07 INFO Executor: Executor is trying to kill task 1.0 in stage 4.0 (TID 9)
15/01/20 00:54:07 INFO TaskSchedulerImpl: Stage 4 was cancelled
15/01/20 00:54:07 INFO DAGScheduler: Job 2 failed: collect at /home/andargor/dev/elite-trade-analyzer/spark.py:81, took 10.652747 s
15/01/20 00:54:07 WARN PythonRDD: Incomplete task interrupted: Attempting to kill Python Worker
15/01/20 00:54:07 INFO Executor: Executor killed task 1.0 in stage 4.0 (TID 9)
15/01/20 00:54:07 WARN TaskSetManager: Lost task 1.0 in stage 4.0 (TID 9, localhost): TaskKilled (killed intentionally)
15/01/20 00:54:07 INFO TaskSchedulerImpl: Removed TaskSet 4.0, whose tasks have all completed, from pool
Traceback (most recent call last):
  File "/home/andargor/dev/elite-trade-analyzer/spark.py", line 187, in <module>
    Main(args.maxjumpdistance, args.currentsystem, args.searchradius)
  File "/home/andargor/dev/elite-trade-analyzer/spark.py", line 81, in Main
    stationCommoditiesTable = {system[0]: system[1] for system in stationCommodities.map(StationCommodityMap).groupByKey().map(StationMap).collect()}
  File "/home/andargor/dev/spark/python/pyspark/rdd.py", line 675, in collect
    bytesInJava = self._jrdd.collect().iterator()
  File "/home/andargor/dev/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 538, in __call__
  File "/home/andargor/dev/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o130.collect.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 4.0 failed 1 times, most recent failure: Lost task 0.0 in stage 4.0 (TID 8, localhost): org.apache.spark.SparkException: Python worker exited unexpectedly (crashed)
        at org.apache.spark.api.python.PythonRDD$$anon$1.read(PythonRDD.scala:169)
        at org.apache.spark.api.python.PythonRDD$$anon$1.<init>(PythonRDD.scala:173)
        at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:95)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:264)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:231)
        at org.apache.spark.api.python.PairwiseRDD.compute(PythonRDD.scala:304)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:264)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:231)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
        at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
        at org.apache.spark.scheduler.Task.run(Task.scala:64)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:192)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.EOFException
        at java.io.DataInputStream.readInt(DataInputStream.java:392)
        at org.apache.spark.api.python.PythonRDD$$anon$1.read(PythonRDD.scala:109)
        ... 14 more

Driver stacktrace:
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1185)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1174)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1173)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1173)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:684)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:684)
        at scala.Option.foreach(Option.scala:236)
        at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:684)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1366)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1327)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
 
I've got a worker thread crashing, I'll investigate tomorrow. It's a VM with 1GB RAM, but it might not be enough for this.

By default, the script will search through every pair of systems in the galaxy. You have to provide the max FSD distance as the final argument, but you probably also want to set the --currentsystem and --searchradius arguments. It would look something like "spark-submit spark.py --currentsystem Lave --searchradius 50 15.43" to find routes of 15.43 Ly or less among the systems within 50 Ly of Lave.
 
Is the EDDN down? Were there any changes to EDDN? I tested the upload of data from EliteOCR, both 0.3.8.1 and 0.4.1. None of it appeared on the client from GitHub... (it has now been running for 2 hours)
 
Is the EDDN down? Were there any changes to EDDN? I tested the upload of data from EliteOCR, both 0.3.8.1 and 0.4.1. None of it appeared on the client from GitHub... (it has now been running for 2 hours)

A couple of people have said that they're not receiving messages. While it all seems to be working fine (I'm receiving messages myself) I've restarted the gateway and relay processes just in case.
 
A couple of people have said that they're not receiving messages. While it all seems to be working fine (I'm receiving messages myself) I've restarted the gateway and relay processes just in case.
Do you run it from the example Client.py (as on github)? It worked for me in the past but does nothing now...
EDIT:
It works again!
 

wolverine2710

Tutorial & Guide Writer
Thanks to themroc, I have been able to put together a MapReduce program using Apache Spark to calculate the best trade route pairs in the galaxy. I will probably expand it to finding the best cycles, simply because I think longer cycles might actually produce better credit/hour returns than strict pairs of trades. Right now, using Slopey's BPC data or EDDB data, I can run a galaxy-wide search for trade routes of 20 Ly or less in about 30 seconds on a 2012 MacBook Pro. Getting Spark to run on Windows is a bit of a challenge because you need to set up Hadoop for Windows, but on Linux/BSD/Mac based systems this should be pretty easy.

Repo is on Github at huadianz/elite-trade-analyzer. Can't post links apparently.

Github repository.
 

wolverine2710

Tutorial & Guide Writer
I've created a new section in the OP: Tools created for EDDN and/or using it. I'm going to reread this whole thread in reverse order and put each and every tool I'll find in it. Later it will be copied to the EDDN section of the 3rd party tools thread.

The (un)lucky first one - first entry.

Name: Elite Trade Analyzer
Author: Pandemic
Description: Trading tool.
Website: N/A
Source available: Yes. Open source license: not specified. GitHub repository.
Thread: N/A
Compatible with Release: Yes
EDDN support: SUB
Value added service: N/A
Comments: This tool analyzes trade data and finds the best routes pursuant to several parameters: proximity to current location, distance between trade stations, and current commodity prices and supplies.

Pandemic and others. Let me know if this is OK, otherwise I will change it.
 
wolverine2710

Tutorial & Guide Writer
Added to the OP, section "Tools created for EDDN and/or using it"

Name:
Author: Askarr
Description: Archive for EDDN messages.
Website: N/A
Source available: No
Thread: N/A
Compatible with Release: Yes
EDDN support: SUB
Value added service (VAS): Archiving
Comments: Announcement here, with a code sample of how to use it. More info about usage here. Note: You must first ask for a key before you can use it.

@Askarr: You might want to consider creating a new thread for this. It makes linking to it much easier, and users can comment on it. The VAS is too good to be snowed under...

Questions:

  • Are commanders currently using your VAS archive?
  • Have you by any chance made progress with "wolverine's Xmas tree"? ;-)

Normally I would have PM'd you, but that's not possible. Have you chosen to disable that?
 
If anyone wants a "known good" list of stations, these are the stations I've been at, and they are good (trust me). If you find this interesting, I will keep updating it.

EDIT: Hmm, it might not be as "known good" as I would like (see LHS 3877); maybe the *cough* source *cough* is not as reliable as I thought.


Code:
Adeo (Dobrovolski City)
Adeo (Ramelli Dock)
Aequeelg (Robinson Station)
Aiabiko (Gooch Terminal)
Aiabiko (Haber Dock)
Aiabiko (Haisheng Port)
Aiabiko (Jett Ring)
Aiabiko (Maxwell Orbital)
Aiabiko (Savitskaya Station)
Aiabiko (Treshchov Port)
Aiabiko (Weston Dock)
Aiabiko (White Orbital)
Alectrona (Marley City)
Alectrona (Russ City)
Andlit (Hilmers Ring)
Anlave (Hogg City)
Anlave (Kobayashi City)
Anlave (Suri Park)
Aoniu (Derekas Prospect)
Atese (Miyasaka Ring)
Bast (White Hart Lane)
Belalans (Ban Hub)
Belalans (Boscovich Ring)
Belalans (Luu Dock)
Belalans (Mandel Dock)
CD-75 661 (Kirk Dock)
CD-75 661 (Neujmin Station)
Chemaku (Crampton Port)
Chemaku (Ferguson City)
Chemaku (Kaku Orbital)
Chemaku (Lawson Station)
Chi Eridani (Steve Masters)
Chi Eridani (The Ascending Phoenix)
Ciguru (Hui Mines)
Cochipati (Nye Station)
Damna (Nemere Market)
Delta Pavonis (Hooper Relay)
Delta Phoenicis (Trading Post)
Derrim (Nachtigal Hub)
Ethgreze (Bloch Station)
Gakiutl (Bykovsky Orbital)
George Pantazis (Zamka Platform)
Gwaelod (von Helmont Terminal)
HIP 81998 (Napier Ring)
Hach (Evans City)
Hach (Galilei Station)
Hach (Zudov Orbital)
Hecate (RJH1972)
Hedetet (Forfait Port)
Heike (Braun Enterprise)
Heike (Brunel City)
Ho Hsien (Dutton Station)
Hyldekagati (Brunel Station)
Inktasa (Oleskiw Station)
Jaroua (McCool City)
Jaroua (Schwarzschild Port)
Jaurinani (Ellison Co-operative)
Kamitra (Hammel Terminal)
Kornephoros (Worden Station)
LFT 1231 (Laird Vision)
LHS 3877 (Foreman City)
LHS 3877 (Shaw City)
LHS 3877 ( Ring)
LHS 3877 (Williams Port)
LHS 64 (McArthur Colony)
LHS 64 (Wiberg Hanger)
LP 825-559 (Larson Terminal)
LP 906-9 (Hawking Enterprise)
LTT 4586 (Dana Dock)
Lalande 30699 (Filipchenko Gateway)
Latucano (Jeury Hub)
Lave (Lave Station)
Leesti (George Lucas)
Lhanayi (Hopkinson Gateway)
Liu Di (Sudworth Orbital)
MCC 858 (Al-Farabi Port)
MCC 858 (Rennie City)
Minmar (Bayliss Landing)
Moirai (Scoutrix Prime)
Murarija (Prunariu Platform)
Murungh (Weston Plant)
NLTT 49528 (Anning Hub)
NLTT 49528 (O'Connor Landing)
NLTT 49528 (Titov Settlement)
NLTT 53889 (Holberg Hub)
NLTT 53889 (Pordenone Port)
Nauahtunu (Brunner Survey)
Nidanga (Blodgett Colony)
Njangari (Lee Hub)
Orrere (Sharon Lee Free Market)
PPM 41187 (Burnet Port)
Penai (Oshima Dock)
Psamathe (Witt Gateway)
Purut (Liouville Plant)
Ross 765 (Pogue Platform)
Saraj (Sturckow Dock)
Semali (Piserchia Dock)
Shinrarta Dezhra (Jameson Memorial)
Tangaroa (ASYLUM)
Tanmark (Cassie-L-Peia)
Tau Sagittarii (Fourneyron Orbital)
Thrutis (Kingsbury Dock)
Toxandji (Tsunenaga Orbital)
Tripu (Chargaff Orbital)
Tujia (Asire Port)
Vitoa (Mitchell Dock)
Wolf 751 (Reightler Settlement)
Xihe (Zhen Dock)
Yarigui (Kier Ring)
Yarigui (Lem Dock)
Yarigui (Moore Orbital)
Yarigui (Salgari Hub)
Yarigui (Walker Ring)
Ye'kuapemas (Kekule Orbital)
Zaonce (Ridley Scott)
Zeessze (Nicollier Hanger)
q1 Eridani (Windt Terminal)
 
Today the feed stopped sending me updates for some reason. If it weren't for Quazil's being open at the same time, I might not have noticed. I had to restart my client.

Are there any ZMQ tricks/parameters to set for it to reconnect or is this supposed to be automagic?
 
Today the feed stopped sending me updates for some reason. If it weren't for Quazil's being open at the same time, I might not have noticed. I had to restart my client.

Are there any ZMQ tricks/parameters to set for it to reconnect or is this supposed to be automagic?

ZeroMQ will reconnect automatically, but follows an exponential back-off curve as reconnections fail. However, PUB/SUB socket pairs will not resend data that you miss. There is no way to detect disconnections with ZeroMQ as it was not designed for over-the-internet peering where clients drop out all the time. The PUB server will leak more and more memory as clients drop out and reconnect.
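
For reference, these are the knobs pyzmq exposes for that automatic back-off; this is a hedged sketch, and the endpoint below is a placeholder rather than the real relay address.

Code:
# Tuning ZeroMQ's automatic reconnect back-off via pyzmq (illustrative values).
# The endpoint is a placeholder -- use the relay address given in the OP.
import zmq

ctx = zmq.Context()
sub = ctx.socket(zmq.SUB)
sub.setsockopt(zmq.SUBSCRIBE, b"")            # subscribe to everything
sub.setsockopt(zmq.RECONNECT_IVL, 1000)       # first retry after 1 s
sub.setsockopt(zmq.RECONNECT_IVL_MAX, 60000)  # cap the exponential back-off at 60 s
sub.connect("tcp://eddn.example.net:9500")    # placeholder endpoint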
 
Today the feed stopped sending me updates for some reason. If it weren't for Quazil's being open at the same time, I might not have noticed. I had to restart my client.

Are there any ZMQ tricks/parameters to set for it to reconnect or is this supposed to be automagic?

Check out how I do it
So what you want to do is set a timeout on ZMQ. This will not prevent you from missing some data, but after a certain time you're back on track :)
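
To make that concrete, here is a rough pyzmq version of the idea (not the poster's linked code); handle() and the relay address are placeholders I've made up for illustration.

Code:
# Rough sketch of the timeout-and-reconnect approach in pyzmq.
# handle() and the relay address are placeholders, not names from this thread.
import zmq

RELAY = "tcp://eddn.example.net:9500"   # placeholder -- use the address from the OP

def handle(raw):
    print(len(raw))                      # stand-in for real message processing

ctx = zmq.Context()

def fresh_socket():
    sub = ctx.socket(zmq.SUB)
    sub.setsockopt(zmq.SUBSCRIBE, b"")
    sub.setsockopt(zmq.RCVTIMEO, 600000)  # bail out of recv() after 10 min of silence
    sub.connect(RELAY)
    return sub

sub = fresh_socket()
while True:
    try:
        handle(sub.recv())
    except zmq.Again:                     # RCVTIMEO expired: assume the connection is dead
        sub.close(linger=0)
        sub = fresh_socket()              # anything missed is gone, but the feed resumes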
 
Check out how I do it
So what you want to do is set a timeout on ZMQ. This will not prevent you from missing some data, but after a certain time you're back on track :)

Ah, cool, ZMQ::SOCKOPT_RCVTIMEO. I'm using a node.js client, but that gives me the example I need, thanks. :)

EDIT: Bleh, the node.js module doesn't have that socket option, I need to do it manually.
 
Ah, cool, ZMQ::SOCKOPT_RCVTIMEO. I'm using a node.js client, but that gives me the example I need, thanks. :)

EDIT: Bleh, the node.js module doesn't have that socket option, I need to do it manually.

I had to use a pretty new version to make it work too.
 
Check out how I do it
So what you want to do is set a timeout on ZMQ. This will not prevent you from missing some data, but after a certain time you're back on track :)

Have you confirmed this works? I tried turning on TCP keep-alive and it did not work for me with NetMQ. I had to implement a reconnect myself.

It appears to be a fixed amount of time, around 10 minutes, which sounds like a TCP socket time-out. This would be a flaw in the 0MQ design/implementation if they don't do something to keep the TCP socket alive for you.
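
For what it's worth, libzmq does have TCP keep-alive options (shown here via pyzmq; NetMQ exposes equivalents), though whether they actually prevent the ~10-minute drop depends on what is killing the idle connection. The values and the endpoint below are illustrative assumptions.

Code:
# libzmq's TCP keep-alive knobs, shown via pyzmq -- illustrative values only.
import zmq

ctx = zmq.Context()
sub = ctx.socket(zmq.SUB)
sub.setsockopt(zmq.SUBSCRIBE, b"")
sub.setsockopt(zmq.TCP_KEEPALIVE, 1)          # enable SO_KEEPALIVE on the TCP socket
sub.setsockopt(zmq.TCP_KEEPALIVE_IDLE, 120)   # start probing after 120 s idle
sub.setsockopt(zmq.TCP_KEEPALIVE_INTVL, 30)   # then probe every 30 s
sub.connect("tcp://eddn.example.net:9500")    # placeholder endpoint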
 
Have you confirmed this works? I tried turning on TCP keep-alive and it did not work for me with NetMQ. I had to implement a reconnect myself.

It appears to be a fixed amount of time, around 10 minutes, which sounds like a TCP socket time-out. This would be a flaw in the 0MQ design/implementation if they don't do something to keep the TCP socket alive for you.

It's been feeding EDDB for weeks without a single outage...
 
Questions:
  • Are commanders currently using your VAS archive?
  • Have you by any chance made progress with "wolverine's Xmas tree"? ;-)
Normally I would have PM'd you, but that's not possible. Have you chosen to disable that?
Oh blast, no wonder nobody has been PMing me :eek:

I've received no requests as yet (though the above might have something to do with that!), and yes, the Christmas tree is ongoing - maybe an alpha soon.

Edit: Enabled PMs. Let me know if you still can't.
 