Discussion:
Moving away from cron jobs to some workflow manager
Rabin Yasharzadehe
2018-06-19 06:06:56 UTC
Permalink
Hi all,

I need some advice. Currently I have a huge cron file which schedules tasks
one after another, and each task is positioned precisely (with some room for
error) to start after its predecessor.

So if one job starts at 00:00, goes off to fetch some files, and takes
3 minutes, the next job is scheduled to start right after it, at ~00:05,
and so on.
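The cron file looks roughly like this (times and script paths simplified for
the example): each offset is just a guess at the previous job's worst-case
runtime.

```
0  0 * * *  /opt/jobs/fetch_files.sh     # takes ~3 minutes
5  0 * * *  /opt/jobs/process_files.sh   # assumes fetch finished by 00:05
15 0 * * *  /opt/jobs/build_report.sh    # assumes processing finished by 00:15
```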

The problem is that if one job fails, all the other jobs which depend on
it fail as well, and then I get a shitload of alerts. The worst part is
that if I have to start a batch process manually, I need to go to each
machine and start each job by hand, in the right order.

I was looking to solve this problem with a tool which can manage this
"pipeline", and I came across several tools like Luigi and (Apache)
Airflow. I started with Luigi, but it didn't look right for the job;
then I tried Airflow, but was not able to make it work: the job queue
never executed. =(

Does anyone have experience with Airflow, or another tool like it which
they can recommend?
My needs are to be able to execute my current shell/Python/PHP scripts and
build the dependencies between them, and I'd prefer the option of remote
execution, so that I have a central place to manage and monitor all the
workflows executed on the several nodes.

Thanks in advance,
Rabin
Rabin Yasharzadehe
2018-06-19 06:41:30 UTC
Permalink
I'd never heard of it, but from reading the manual and the 10-minute
presentation, it seems more suitable for data crunching, where you have a
pool of compute resources and you submit jobs to it.

My case is a bit different: I have many jobs which need to run
(orchestrated) on their own hosts, each with a specific environment and
setup.


--
Rabin
Why not a minimal deployment of SGE, which would also allow you to go
multi-executor?
https://arc.liv.ac.uk/trac/SGE
—mav
Marc Volovic
_______________________________________________
Linux-il mailing list
http://mailman.cs.huji.ac.il/mailman/listinfo/linux-il
Rabin Yasharzadehe
2018-06-19 08:20:48 UTC
Permalink
I'll have to read the documentation to learn more, but this project seems
barely maintained, with only minor releases every year or two (the last
release was two years ago); that doesn't inspire a lot of confidence.

But I'll check it out,
thanks.

--
Rabin
Hi,
It is intended for submitting multiple jobs for crunching, but you can use
it (SOGE) or SLURM for issuing jobs and dependent jobs, even with a single
machine as both submit and execution host. It can be used as a
resource-aware job scheduler.
—mav
Marc Volovic
Omer Zak
2018-06-19 06:42:35 UTC
Permalink
For dependency management, you may want to use 'make' or a modern
equivalent ('ant', 'gradle', etc.).
For controlling remote nodes, 'ansible' may be able to do the work.
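For example, the 'make' idea can be sketched like this: each job becomes a
target whose marker file records completion, so a failed job stops its
dependents, and re-running make resumes from the first unfinished step.
(Here `touch` stands in for the real fetch/process/report scripts; the file
names are invented.)

```shell
# Generate a Makefile with printf so the mandatory recipe tabs (\t)
# survive copy/paste, then run it.
{
  printf 'all: report.done\n\n'
  printf 'fetch.done:\n\ttouch fetch.done\n\n'
  printf 'process.done: fetch.done\n\ttouch process.done\n\n'
  printf 'report.done: process.done\n\ttouch report.done\n'
} > Makefile.jobs
make -f Makefile.jobs all
```

With independent branches in the dependency graph, `make -j` would also run
them in parallel for free.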

--- Omer Zak
--
More proof the End of the World has started. Just saw this online:
I think it's beginning! Ten minutes ago there was a group of people
waiting at the bus stop outside my house. Now, they're all gone!
My own blog is at https://tddpirate.zak.co.il/

My opinions, as expressed in this E-mail message, are mine alone.
They do not represent the official policy of any organization with which
I may be affiliated in any way.
WARNING TO SPAMMERS: at https://www.zak.co.il/spamwarning.html
Moish
2018-06-19 09:12:16 UTC
Permalink
Try GNUbatch.
Dimid Duchovny
2018-06-19 09:25:21 UTC
Permalink
Hi Rabin,


I'm far from being a Linux expert, but isn't dependency between services
handled by systemd?
E.g. https://wiki.archlinux.org/index.php/Systemd/Timers

HTH
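For what it's worth, the dependency half of that suggestion would be
expressed with oneshot units, roughly like this (unit names and paths
invented for illustration):

```
# process-files.service: runs only after fetch-files.service completes
[Unit]
Requires=fetch-files.service
After=fetch-files.service

[Service]
Type=oneshot
ExecStart=/opt/jobs/process_files.sh
```

With Type=oneshot, After= waits for the prerequisite unit to actually
finish, not just to start.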
Rabin Yasharzadehe
2018-06-19 09:32:30 UTC
Permalink
systemd is a completely different tool, which was not designed for this kind
of purpose.
(Maybe in the future it will grow into something like that ;-) )

I'm looking for something a bit more sophisticated than "go to this
machine", "run this script" and "expect this result". I'd like to define
execution time limits (finish in 3 minutes) and maybe some grace time (can
go up to 5 minutes), have the orchestrator monitor the process, and have a
nice dashboard where I can see everything from above.
(This is why Airflow looked so appealing, but its installation process
and documentation are still lagging behind.)
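For the per-process half of this, coreutils timeout(1) already covers the
limit-plus-grace idea; a minimal sketch, with `sleep` standing in for the
real batch job:

```shell
# Hard limit 180s (the 3-minute target); if the job ignores SIGTERM,
# SIGKILL follows 120s later (the 5-minute ceiling).
timeout --kill-after=120 180 sleep 2   # `sleep 2` stands in for the real job
echo "exit status: $?"                 # 124 would mean the limit was hit
```

An orchestrator would still be needed on top for the dashboard and
cross-host view.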


--
Rabin
Omer Zak
2018-06-19 13:17:51 UTC
Permalink
1. Execution time limits:

Ansible has async tasks with polling intervals. I did not research
methods to kill hung tasks.
https://docs.ansible.com/ansible/latest/user_guide/playbooks_async.html
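Roughly, in a playbook (host group, task name and path invented), `async`
gives a task a hard time budget and `poll` the check interval:

```
- hosts: batch_nodes
  tasks:
    - name: run the fetch job with a 5-minute budget
      command: /opt/jobs/fetch_files.sh
      async: 300   # task fails if not finished within 300 seconds
      poll: 10     # check on it every 10 seconds
```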

2. Dashboard-like functionality

According to:
https://www.reddit.com/r/ansible/comments/5ksphc/best_web_gui_for_run_ansible_playbooks/

There are the following options:
- ansible-tower
- remote-task-executor (did not look into it)
- nci-ansible-ui
- Jenkins (normally used for CI/CD setups)

In addition to the above, you may want to look into Ansible
alternatives:
- Puppet
- Chef
- SaltStack
A quick Google search yielded:
https://www.intigua.com/blog/puppet-vs.-chef-vs.-ansible-vs.-saltstack
--
What happens if one mixes together evolution with time travel to the
past?  See: https://www.zak.co.il/ideas/stuff/opinions/eng/evol_tm.html
My own blog is at https://tddpirate.zak.co.il/

My opinions, as expressed in this E-mail message, are mine alone.
They do not represent the official policy of any organization with
which
I may be affiliated in any way.
WARNING TO SPAMMERS:  at https://www.zak.co.il/spamwarning.html
Steve Litt
2018-06-20 01:38:04 UTC
Permalink
On Tue, 19 Jun 2018 12:25:21 +0300
Post by Dimid Duchovny
Hi Rabin,
I'm far from being a linux expert, but isn't dependency between
services handled by systemd?
E.g. https://wiki.archlinux.org/index.php/Systemd/Timers
If you drive on that side of the road :-)

More seriously, I think you're mixing up systemd timers with systemd's
(sort of) ability to delay running long-running daemon B until daemon A,
which it depends on, is running.

But the OP's needs were much greater. Apparently he couldn't depend on
any of the consecutively run programs concluding in a certain amount of
time, and if he granted each one a crazy long amount of time, it would
exceed 24 hours. What's needed is for each process to provide some clue
that it's finished. Assuming its output files are those clues, Omer's
right: make could be used not only to do the job, but to add some
parallelization, so that if two processes' inputs are each complete,
those two processes can run in tandem.

It's brilliant.

SteveT

Steve Litt
June 2018 featured book: Twenty Eight Tales of Troubleshooting
http://www.troubleshooters.com/28
Steve Litt
2018-06-20 01:29:43 UTC
Permalink
On Tue, 19 Jun 2018 09:42:35 +0300
Post by Omer Zak
For dependency management, you may want to use 'make'
If you can depend on each task to create specific files, yeah, that
sounds like a great idea. I should have thought of it.

And then you just put it in a loop so progress is always being made,
with alarms to warn if a step takes too long.

SteveT

Steve Litt
June 2018 featured book: Twenty Eight Tales of Troubleshooting
http://www.troubleshooters.com/28
Steve Litt
2018-06-20 05:01:08 UTC
Permalink
On Tue, 19 Jun 2018 09:42:35 +0300
Post by Omer Zak
For dependency management, you may want to use 'make' or modern
Hi Omer,

While corresponding with someone off-list, I had another idea, maybe as
good as using make. I could make a customized installation of the
process-supervisor part of either the runit or s6 inits, or maybe even
use plain daemontools, to make sure apps don't run until the apps they
depend on have finished. So for appC, which depends on output from appA
and appB, the run script for appC would look something like the
following:

#!/bin/sh
# run script for appC; the supervisor re-runs this whenever it exits
if appA_not_finished; then   # placeholder test: true while appA is unfinished
    sleep 60                 # prevent excessive polling
elif appB_not_finished; then # placeholder test for appB
    sleep 60
else
    exec appC
fi

If every app expresses its immediate prerequisites that way, the whole
thing will run very efficiently, and in many cases in parallel, wherever
not blocked by unfinished prerequisites.

Some more complexity would need to be added so that appA and appB don't
start again before the entire bucket brigade finishes.


SteveT

Steve Litt
June 2018 featured book: Twenty Eight Tales of Troubleshooting
http://www.troubleshooters.com/28
Ari Becker
2018-06-19 13:09:35 UTC
Permalink
Hi Rabin,

Did you consider using Jenkins? It may be a little heavyweight, but it
should be relatively easy to set up and configure. You can use the same
scripts you're using today, state which jobs run on which nodes, set up
dependencies between them, set timeouts, and set cron triggers to start
the initial job... it seems to answer your requirements.

-Ari
Moish
2018-06-19 13:19:23 UTC
Permalink
linux.il
2018-06-19 13:35:47 UTC
Permalink
I suggest checking out Jenkins (as already suggested) and Rundeck.
Vitaly
Lior Okman
2018-06-20 05:41:55 UTC
Permalink
Hi,

You could try something like Concourse (https://concourse-ci.org/). It
lets you define a pipeline comprised of jobs and the order in which they
should be invoked.


--
Lior