Remote Jobs


Innovation tools to help survive today’s business climate – ZAWYA

Dubai: Innovation platform GELLIFY hosts another series of webinars this month featuring expert opinions, solutions, and use cases on how digital innovation can mitigate business risks. Part of ‘Future-proof Your Organization with “Black Swan” Capabilities’, each session will cover a range of topics including digital intrapreneurship, blockchain-backed supply chains, VR and AR remote collaboration, risk mitigation through predictive analytics and AI, and smart-working cyber security.

May 5:  Can emergencies be a catalyst for the supply chain of the future?

Time:  2:00 pm – 3:00 pm

Register for free on: 

By digitizing the value chain, organizations can create a harmonic flow of information and materials, which ensures visibility on all processes and one’s ability to control them. In this session, GELLIFY will discuss how resilient, digitized supply chains can intercept weak signals, including: (1) mindset – to prepare for unexpected supply chain events; (2) skillset – to confront crises with speed and adaptability; and (3) toolset – to gain best practices for redesigning SCM processes and operating models.

May 12:  Smarter Working: Key tools for the digitalization of processes

Time: 2:00 pm – 3:00 pm

Register for free on: 

GELLIFY will illustrate how end-to-end digital (paperless) processes are key to safeguarding productivity and allow participants to select the right “tools” needed for a specific organization. This session will cover electronic signatures, authentication & authorization mechanisms, and remote identification​. GELLIFY will also discuss how security and data integrity enter into the equation when you choose a digital process technology. Is blockchain the answer?

May 19: There is no resilient Smart Business without Smart Employees and Mindful Leaders

Time: 2:00 pm – 3:00 pm

Register for free on: 

GELLIFY will discuss how to build the technology and mental stack for resilience, comprising: (1) mindset: the new principles of mindful leadership; (2) skillset: the techno-business hard skills and emotional IQ; and (3) toolset: a phygital workplace and knowledge sharing ecosystem, new organizational models, and objectives & key results.

For more information about the ‘Black Swan Series,’ please visit  or contact GELLIFY Middle East on



GELLIFY is an innovation platform that connects high-tech B2B startups with organizations to innovate their processes, products, and business models, through investments.

Headquartered in Italy with offices in Spain and UAE, the company’s success banks on its unique model which infuses businesses with the latest technologies from B2B startups and GELLIFY capabilities. GELLIFY brings startups from their embryonic “air” or “liquid” state to a reliable and scalable “solid” state, using its unique ‘GELLIFICATION’ growth program. This growth is funded through smart investments, supplied by GELLIFY and its co-investors.

GELLIFY has also built a community called ‘EXPLORE’ where entrepreneurs, innovators, and professionals can connect on any digital device. Through the app, available to download from the App Store and Google Play, participants can engage in phygital (physical and digital) experiences, take part in events, and infuse their businesses with the latest technologies from startups and GELLIFY capabilities.

GELLIFY comprises three business units: (1) GELLIFY for Startups, which, through its ‘gellification’ program, provides more comprehensive services than the mentorship and basic support typical of incubators; (2) GELLIFY for Companies, which provides open innovation services to design and implement the digital transformation of small businesses and large corporations; and (3) GELLIFY for Investors, which provides investment advisory services and manages the GELLIFY Investment Fund, investing in selected innovative B2B tech startups.

For more information about GELLIFY in the Middle East, visit: 

Full press kit is available to download here: 

For more information, contact Twister Communications Middle East:
Sheila Tobias / Mai Touma
Email:  or
Office: +9714 432 1195
Mobile: +971 55 872 3009 or +971 55 768 4150

© Press Release 2020


SolarWinds Integrates Web Help Desk and DameWare(R) Remote Support to Accelerate IT Ticket Resolution

SolarWinds introduced new integration between SolarWinds Web Help Desk and DameWare Remote Support, empowering IT Pros to provide fast and direct IT incident support and reduce overall end-user performance disruption wherever end-users are working. (image: DameWare)



Enhanced IT Help Desk Ticketing Software Features New Capabilities for Delivering Live Technical Support for Remote End-Users, Asset Reporting and Documentation, Helping IT Pros Meet Rising Business Expectations and Bolstering Them With Critical IT Performance and Trend Data

AUSTIN, TX, Dec. 9 (Korea Bizwire) – SolarWinds (NYSE: SWI), a leading provider of powerful and affordable IT performance management software, today introduced new integration between SolarWinds Web Help Desk and DameWare Remote Support, empowering IT Pros to provide fast and direct IT incident support and reduce overall end-user performance disruption wherever end-users are working.

As end-user expectations for optimal IT performance are on the rise and IT departments are tasked with delivering near-immediate problem resolution, businesses are also becoming more global and mobile, increasingly supporting teleworking and travel to meet business needs. In a 2014 HDI survey of over 1,300 IT support center respondents, remote support was cited as the top technology needed to provide successful end-user desktop support, with 43 percent of organizations resolving half of their IT tickets remotely in 2014.

SolarWinds Web Help Desk and DameWare Remote Support, currently supporting over 25,000 organizations collectively, now integrate to allow IT Pros to remotely access and control end-users’ Windows, Linux, and Mac OS devices and immediately address their IT problems, all while recording ticket details including chat transcripts, asset status and other data to reduce downtime and optimize long-term business performance.

“IT Pros require direct access to the end-users’ devices to investigate and resolve their problems quickly and they need an automated solution for keeping track of those IT incidents and assets,” said Chris LaPoint, vice president of product management, SolarWinds. “SolarWinds Web Help Desk and DameWare Remote Support seamlessly integrate with the goal of solving end-user problems faster and enabling the automatic storage of IT incident resolution metrics; in this way, IT is able to provide unique insight into a business’ problem areas and apply appropriate tech solutions to fix them.”

SolarWinds Web Help Desk and DameWare Remote Support integration highlights
With the new integration between SolarWinds Web Help Desk and DameWare Remote Support, IT Pros are able to launch remote support sessions directly from tickets and asset reports, essentially providing onsite support delivery for end-users working from home or on the go. IT Pros can:

  • Connect with a remote end-user’s computer or server directly from an automated support ticket or from an asset management record
  • Store critical information from completed support sessions into the ticket log and asset data, including chat transcripts, screenshots and data such as remote access duration

In addition, SolarWinds Web Help Desk features new asset reporting for easy monitoring of business-critical metrics including time-to-resolution and end-user satisfaction, enabling IT Pros to:

  • Maintain reports on assets, both hardware and software, for records, audit trails or to identify weak spots within an infrastructure
  • Generate reports about assets both at the aggregate and single-system levels, filtered by OS or model, location, incidents, warranty expirations and more

SolarWinds Web Help Desk features automated help desk ticketing for streamlined IT service management from request to resolution with rule-based routing and escalation, real-time tracking and SLA alerts, simplified IT asset management, tracking and reporting. Pricing starts at $695.

DameWare Remote Support provides remote access to Windows, Linux and Mac OS X desktops, laptops and servers for remote troubleshooting and management of servers and workstations. Admins can reboot systems; start and stop services and processes; copy or delete files; view and clear event logs; manage multiple AD domains, users and groups; remotely reset passwords; and gain access to Windows computers from iOS and Android mobile devices. Pricing starts at $349.

Pricing for both products includes the first year of maintenance. For more information, including a downloadable, free evaluation, visit the SolarWinds website or call 866.530.8100.

Additional Resources:

About SolarWinds
SolarWinds (NYSE: SWI) provides powerful and affordable IT management software to customers worldwide from Fortune 500® enterprises to small businesses. In all of our market areas, our approach is consistent. We focus exclusively on IT Pros and strive to eliminate the complexity that they have been forced to accept from traditional enterprise software vendors. SolarWinds delivers on this commitment with unexpected simplicity through products that are easy to find, buy, use and maintain while providing the power to address any IT management problem on any scale. Our solutions are rooted in our deep connection to our user base, which interacts in our thwack® online community to solve problems, share technology and best practices, and directly participate in our product development process. Learn more today at

SolarWinds, and thwack are registered trademarks of SolarWinds. All other company and product names mentioned are used only for identification purposes and may be trademarks or registered trademarks of their respective companies.


Contact Information

Nicole Fachet
Phone: 212.871.3950

Courtney Cantwell
Phone: 512.682.9692

Source: SolarWinds via GLOBE NEWSWIRE



16 ultimate SSH hacks | CSO Online

SSH tip #14: Verify server keys

You can see the fingerprint and randomart for any computer you’re logging into by configuring /etc/ssh/ssh_config on your client computer. Simply uncomment the VisualHostKey option and set it to yes:

VisualHostKey yes
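As an aside, if you'd rather not edit ssh_config, the same option can be enabled for a single connection with the -o switch (using the same user@host2 placeholder as the example below):

```shell
# one-off equivalent of the VisualHostKey config setting; no file changes needed
ssh -o VisualHostKey=yes user@host2
```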

Then login to any remote computer to test it:

$ ssh user@host2

Host key fingerprint is 66:a1:2a:23:4d:5c:8b:58:e7:ef:2f:e5:49:3b:3d:32

+---[ECDSA 256]---+
|                 |
|                 |
| .   o .         |
| + = .  . .      |
|. + o .  S       |
| o o  oo         |
|. + . .+ +       |
| . o .. E o      |
| .o.+ .          |
+-----------------+

user@host2’s password:

Obviously you need a secure method of getting verified copies of the fingerprint and randomart images for the computers you want to log into: a hand-delivered printed copy, encrypted email, the scp command, secure FTP, or reading them over the telephone. The risk of a successful MITM attack is small, but if you can figure out a relatively painless verification method it’s cheap insurance.
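On the server side, one way to produce that verified copy is to print the fingerprint and randomart straight from the host key with ssh-keygen. Here is a sketch using a freshly generated throwaway key so it can be run anywhere; on a real server you'd point the same command at the actual host key, e.g. /etc/ssh/ssh_host_ecdsa_key.pub:

```shell
# generate a throwaway ECDSA key pair standing in for a server host key
rm -f /tmp/demo_hostkey /tmp/demo_hostkey.pub
ssh-keygen -q -t ecdsa -N '' -f /tmp/demo_hostkey

# -l prints the fingerprint and -v adds the randomart image;
# this output is what you'd hand-deliver or read over the phone
ssh-keygen -lvf /tmp/demo_hostkey.pub
```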

SSH tip #13: Attach to a remote GNU screen session

You can attach a GNU screen session remotely over SSH; in this example we’ll open a GNU screen session on host1, and connect to it from host2. First open and then detach a screen session on host1, named testscreen:

host1 ~ $ screen -S testscreen

Then detach from your screen session with the keyboard combination Ctrl+a+d:

[detached from 3829.testscreen]

You can verify that it’s still there with this command:

host1 ~ $ screen -ls

There is a screen on:

3941.testscreen (03/18/2012 12:43:42 PM) (Detached)

1 Socket in /var/run/screen/S-host1.

Then re-attach to your screen session from host2:

host2 ~ $ ssh -t user@host1 screen -r testscreen

You don’t have to name the screen session if there is only one.

SSH tip #12: Launch a remote screen session

What if you don’t have a running screen session? No worries, because you can launch one remotely:

host1 ~ $ ssh -t user@host2 /usr/bin/screen -xRR

SSH tip #11: SSHFS is better than NFS

sshfs is better than NFS for a single user with multiple machines. I keep a herd of computers running because it’s part of my job to always be testing stuff. I like having nice friendly herds of computers. Some people collect Elvis plates, I gather computers. At any rate opening files one at a time over an SSH session for editing is slow; with sshfs you can mount entire directories from remote computers. First create a directory to mount your sshfs share in:

$ mkdir remote2

Then mount whatever remote directory you want like this:

$ sshfs user@remote2:/home/user/documents remote2/

Now you can browse the remote directory just as though it were local, and read, copy, move, and edit files all you want. The neat thing about sshfs is all you need is sshd running on your remote machines, and the sshfs command installed on your client PCs.
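One detail worth adding: when you're finished, unmount the share before disconnecting. The command differs by platform; the mountpoint name here is the remote2 directory from the example above:

```shell
fusermount -u remote2    # Linux
# umount remote2         # OS X and the BSDs
```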

SSH tip #10: Log in and run a command in one step

You can log in and establish your SSH session and then run commands, but when you have a single command to run why not eliminate a step and do it with a single command? Suppose you want to power off a remote computer; you can log in and run the command in one step:

carla@local:~$ ssh user@remotehost sudo poweroff

This works for any command or script. (The example assumes you have a sudo user set up with appropriate restrictions, because allowing a root login over SSH is considered an unsafe practice.) What if you want to run a long complex command, and don’t want to type it out every time? One way is to put it in a Bash alias and use that. Another way is to put your long complex command in a text file and run it according to tip #9.
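The single-command form also takes several commands at once if you quote them, so the local shell passes the whole string to the remote side instead of splitting it. A sketch with the same placeholder host:

```shell
# both commands run on remotehost; without the quotes, df would run locally
ssh user@remotehost 'uptime; df -h /'
```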

SSH tip #9: Putting long commands in text files

Put your long command in a plain text file on your local PC, and then use it this way to log in and run it on the remote PC:

carla@local:~$ ssh user@remotehost "`cat filename.txt`"

Mind that you use straight quotation marks, not fancy ones copied from a Web page, and back-ticks, not single apostrophes.

SSH tip #8: Copy public keys the easy way

The ssh-copy-id command is not as well-known as it should be, which is a shame because it is a great time-saver. This nifty command copies your public key to a remote host in the correct format, and to the correct directory. It even has a safety check that won’t let you copy a private key by mistake. Specify which key you want to copy, like this:

$ ssh-copy-id -i .ssh/id_rsa.pub user@remote

SSH tip #7: Give SSH keys unique names

Speaking of key names, did you know you can name them anything you want? This helps when you’re administering a number of remote computers, like this example, which creates the private key web-admin and the public key web-admin.pub:

$ ssh-keygen -t rsa -f .ssh/web-admin

SSH tip #6: Give SSH keys informative comments

Another useful way to label keys is with a comment:

$ ssh-keygen -t rsa -C “downtown lan webserver” -f .ssh/web-admin

Then you can read your comment which is appended to the end of the public key.

SSH tip #5: Read public key comments

$ less .ssh/web-admin.pub


[snip] KCLAqwTv8rhp downtown lan webserver

SSH tip #4: Logging in with server-specific keys

Then when you log in, specify which key to use with the -i switch:

$ ssh -i .ssh/web-admin user@webserver

SSH tip #3: Fast easy known_hosts key management

I love this one because it’s a nice time-saver, and it keeps my ~/.ssh/known_hosts files tidy: using ssh-keygen to remove host keys from the ~/.ssh/known_hosts file. When the remote machine gets new SSH keys you’ll get a warning, when you try to log in, that the key has changed. Using this is much faster than manually editing the file and counting down to the correct line to delete:

$ ssh-keygen -R remote-hostname

Computers are supposed to make our lives easier, and it’s ever so lovely when they do.
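ssh-keygen -R also accepts a -f switch naming an alternative known_hosts file, and it keeps a backup of the previous version as known_hosts.old. That makes it easy to try safely; the sketch below works on scratch files in /tmp so your real known_hosts is untouched:

```shell
# build a scratch known_hosts containing one entry for a made-up host
rm -f /tmp/kh /tmp/kh.old /tmp/demo_key /tmp/demo_key.pub
ssh-keygen -q -t ed25519 -N '' -f /tmp/demo_key
printf 'example.test %s\n' "$(cat /tmp/demo_key.pub)" > /tmp/kh

# remove the entry; the original file is retained as /tmp/kh.old
ssh-keygen -R example.test -f /tmp/kh
```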

SSH tip #2: SSH tunnel for road warriors

When you’re at the mercy of hotel and coffee shop Internet, a nice secure SSH tunnel makes your online adventures safer. To make this work you need a server that you control to act as a central node for escaping from hotspot follies. I have a server set up at home to accept remote SSH logins, and then use an SSH tunnel to route traffic through it. This is useful for a lot of different tasks. For example I can use my normal email client to send email, instead of hassling with Web mail or changing SMTP server configuration, and all traffic between my laptop and home server is encrypted. First create the tunnel to your personal server:

carla@hotel:~$ ssh -f -N -L 9999:localhost:25 user@remotehost

This binds port 9999 on your mobile machine to port 25 on your remote server. The remote port must be whatever you’ve configured your server to listen on. Then configure your mail client to use localhost:9999 as the SMTP server and you’re in business. I use Kmail, which lets me configure multiple SMTP server accounts and then choose which one I want to use when I send messages, or simply change the default with a mouse click. You can adapt this for any kind of service that you normally use from your home base, and need access to when you’re on the road.

#1 Favorite SSH tip: Evading silly web restrictions

The wise assumption is that any public Internet is untrustworthy, so you can tunnel your Web surfing too. My #1 SSH tip gets you past untrustworthy networks that might have snoopers, and past any barriers to unfettered Web surfing. Just like in tip #2 you need a server that you control to act as a secure relay; first set up an SSH tunnel to this server:

carla@hotel:~$ ssh -D 9999 -C user@remotehost

Then configure your Web browser to use port 9999 as a SOCKS 5 proxy. Figure 1 shows how this looks in Firefox.

An easy way to test this is on your home or business network. Set up the tunnel to a neighboring PC and surf some external Web sites. When this works go back and change the SOCKS port number to the wrong number. This should prevent your Web browser from connecting to any sites, and you’ll know you set up your tunnel correctly.

How do you know which port numbers to use? Port numbers above 1024 do not require root privileges, so use these on your laptop or whatever you’re using in your travels. Always check /etc/services first to find unassigned ports. The remote port you’re binding to must be a port a server is listening on, and there has to be a path through your firewall to get to it.
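Those two checks look like this in practice; ss is the modern Linux tool, with netstat as the fallback on older systems and OS X:

```shell
# is port 9999 assigned to a well-known service?
grep -w 9999 /etc/services || echo "9999 looks unassigned"

# what's already listening locally?
ss -tln 2>/dev/null || netstat -tln
```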

To learn more try the excellent Pro OpenSSH by Michael Stahnke, and my own Linux Networking Cookbook has more on secure remote administration including SSH, OpenVPN, and remote graphical sessions, and configuring firewalls.

This article, “16 ultimate SSH hacks,” was originally published at ITworld.



GT Helion Comp – first ride

It’s tempting for a manufacturer to drop the entry point on a range as low as possible. Has GT cut too many corners to cut the price of its Helion Comp or has budget suspension performance got a new benchmark?

Frame and equipment: impressive, bar a few teething troubles

GT launched its Independent Drivetrain suspension design nearly 20 years ago and it’s sticking to its guns with the latest evolution of the system. The downside on a discount bike is that it’s arguably the most complex suspension structure in widespread production.

The shock drives through the base of the seat tube so there’s no getting round a multi-part selection of curved hydroformed tube sections as well as intricate forgings for the seat tube straddle section, a big horseshoe section to mount the main pivot and the large two piece clamshell welded together to form the dangling bottom bracket section. Even the front derailleur is bolted onto a separate arm that’s in turn bolted to the underside of the seatstay bridge and has to be fed through a curved cable tube of the type you used to see on V brakes.

A neat gauge helps simplify suspension sag setup (Russell Burton)

As impressive as it is to see how it all dovetails together, this type of assembly is expensive to produce. This leaves less money for the rest of the bike, and the lower that price point the closer you get to compromising performance.

Initial impressions are good. The 740mm bars are wide for a 110mm-travel bike and hint at more control and chaos-taming potential than you’d expect in this category. An 80mm stem keeps it easy to hold a line on climbs or under power. It syncs well with the 69.5-degree head angle: not too fussy when you’re just cruising, but well weighted on turn-in if you need to whip it round something tight.

Despite the cross-country intentions, the cockpit setup is modern (Russell Burton)

But where a lot of cheaper full sussers suffer is the quality of the damper and fork, and at first we thought that GT had fallen into the trap. The X-Fusion rear shock was stubborn to move and, when it did, it cannoned back to a savage metal-on-metal top-out.

Hoping this was a one-off issue with our shock rather than a cost-cutting compromise too far, we contacted X-Fusion in the US, who came back with a simple ‘try this’ fix. Just unscrewing the air sleeve and popping it back on recharged the negative spring chamber that helps get the shock moving and prevents it bouncing all the way back to the stops. With that back to the correct pressure, and an extra bit of Slick Honey grease around the seal head, we were back in supple, consistently controlled business.

Ride and handling: stiff, smooth and grounded

Despite the complex looking suspension architecture and QR skewers rather than thru-axles, this is an impressively stiff frame in terms of twist and lateral flex. The relatively short 110mm travel enables GT to sit the Helion low to the ground, and together with the well-balanced suspension weight it’s a naturally grounded feeling bike.

The lack of thru-axle doesn’t stop the Raidon fork working well (Russell Burton)

This secure feeling is enhanced by its unique suspension action. The key feature in terms of suspension reaction is that the joint between the front and rear frame halves is higher than a normal suspension setup. It can swing backwards and upwards away from a frontal impact (boulder/step/root) much more easily than a low pivot.

Normally this increased backward and upward swing would cause massive pedal pull-back, directly opposing suspension function and power transmission and causing pedal bob or choke problems. However, the cranks are mounted on a separate section, sitting between the central mainframe pivot and the junction with the rear frame at the bottom. The shock is squeezed between the top of this independent section and the mainframe. Because the crank moves backwards roughly half the distance of the axle as the shock compresses, the effect on pedalling and suspension is also roughly halved.

The Independent Drivetrain suspension reaction amplifies smoothness over small bumps and rough patches, creating a remarkably floated ride for a bike at this price. Wallop something big flat out and the back end copes much better than you’d expect from a 110mm bike. With the transmission only semi-connected to the shock-absorbing process, you can keep cranking to top the speed up without getting a beating through your feet or interrupting impact composure.

The X-Fusion rear shock is supple and controlled (Russell Burton)

The Raidon fork coped well with a range of rubble and trouble too, which is a good job as the rear suspension tends to keep the whole bike glued down rather than waving its front wheel in the air. Add the sorted handling and rear end stiffness and the GT swerves and flows through everything from rolling singletrack to techy trails with enthusiasm and more control than you’d get from a hardtail or most suspension bikes at this price.

The constant midsection movement does create a slight ‘rubber chain’ softness underfoot when you really put the power down, but the consistent trail connection gives the Helion outstanding traction. This gives it fine technical climbing ability on challenging sections and bagged it significant summits during testing, despite its speed- rather than grip-focused WTB rubber. On smoother trails or fire roads you can engage the fork’s remote lockout and the shock-mounted rear lockout for a solid pedalling platform.

Climbing ability is impressive, both on smoother trails and technical ascents (Russell Burton)

Considering the frame complexity and low cost it’s not as heavy as you’d expect. Though it’s not exactly light either, those fast rolling WTBs stoke its speed and your ego nicely so you’re not longing for a hardtail as soon as the trail heads upwards.

There are detail issues, such as the Alivio shifters restricting it to a nine-speed block despite the SLX derailleurs, and stuck-on rather than the lock-on grips listed in the online specs. But given the ride we’d be happy to overlook easily upgraded details if it was our wallet we were opening.


Processes driving nocturnal transpiration and implications for estimating land evapotranspiration

Experimental set-up

The experiment was performed in the Macrocosms platform of the CNRS Montpellier European Ecotron. This platform houses 12 identical and independent experimental units. Each unit is composed of a dome under natural light covering a lysimeter inserted in a technical room. The linear series of 12 domes is oriented east-west, with two additional domes added at each extremity to eliminate any self-shading edge effects. The 30 m3 transparent domes allow for the confinement and control of the atmosphere. Below each dome, a lysimeter/technical room hosts the soil monolith contained in a lysimeter (2 m2 area, 2 m depth), the lysimeter’s weighing strain gauges and various soil-related sensors, the canopy air temperature and relative humidity conditioning units, and the air CO2 regulation. Each dome has a circular base area of 25 m2, of which 20 m2 is covered by concrete and a 5 m2 central area is allocated to the model ecosystems (area 2, 4 or 5 m2); the height in the centre of the dome is 3.5 m. Airflow from the dome area is prevented from leaking into the lysimeter room by means of fitted metal plates and rubber seals. This airflow (from the cooling system) is of two dome volumes per minute (=70 m3 min−1), creating a turbulent environment in which wind speed varies between 0.7–2.5 m s−1 within a fraction of a second, with anemometer readings averaged over a few seconds (Almemo 2890-9, Coalville, UK) of 0.9–1 m s−1. This led to a well-coupled canopy in which no significant differences existed between leaf (MS LT, Optris GmbH, Berlin, Germany) and air temperature (PC33, Mitchell Instrument SAS, Lyon, France) (intercept = −4.3 ± 4.5 [mean ± 95% CI]; slope = 1.15 ± 0.17; R2 = 0.89). The concrete is covered with epoxy resin to prevent its CO2 absorption.

Each macrocosm was designed as an open-flow gas exchange system. A multiplexer allowed the CO2 concentrations at the inlet and outlet of each dome to be measured every 12 min (LI-7000 CO2/H2O analysers, LI-COR Biosciences, Lincoln, NE, USA). These data, combined with the measurement of the air mass flow through each dome, allowed for the calculation of canopy carbon assimilation (Ac). Transpiration (mass loss of the lysimeter) was monitored continuously by four CMI-C3 shear beam load cells (Precia-Molen, Privas, France) providing 3 measurements per minute. We ensured that only canopy carbon (Ac) and water (Ec) balances were measured by covering the ground with a dark plastic cover that prevented flux mixing. This plastic cover was sealed to the fitted metal plates and not to the lysimeter upper ring. There was a slight over-pressure (+5 Pa) in the dome, so a small proportion of the well-mixed canopy air could pass around the plant stems, flushing the soil respiration and evaporation below the plastic sheet into the lysimeter room.

The dome was covered by a material highly transparent to light and UV radiation (tetrafluoroethylene film, DuPont USA, 250 μm thick, PAR transmission 0.9) and exposed to natural light except during the reduced-radiation experiments. For these, an opaque fitted cover (PVC-coated polyester sheet Ferrari 502, assembled by IASO, Lleida, Spain) was placed on each dome, and a set of 5 dimmable plasma lamps with a sun-like spectrum (GAN 300 LEP with the Luxim STA 41.02 bulb, Gavita, Netherlands) allowed radiation to be controlled. The plasma lamps were then turned off to study dark circadian regulation of stomatal conductance. Our conditions may differ from a cloudy day in that radiation was direct, not diffuse. We were interested in testing how reductions in carbon assimilation affect nocturnal transpiration; avoiding diffuse radiation was therefore considered advantageous, because diffuse radiation increases carbon uptake31.

Bean and cotton were planted in rows one month before the start of the measurements and thinned to densities of 10.5 and 9 individuals m−2, respectively. Six macrocosms were assigned to each species, and each individual measuring campaign lasted 3–4 days. The experiments under constant darkness lasted 30 hours, and we used lysimeter weight readings from three macrocosms per species (six per species in all the other reported experiments). Researchers entered the three other macrocosms every 4 hours to conduct manual leaf gas exchange measurements on three leaves per dome (LI-6400, LI-COR Biosciences, Lincoln, NE, USA). At the time of measurements, bean and cotton were both at the inflorescence emergence developmental growth stage (codes 51–59 on the BBCH scale32).

The soil was regularly watered to near field capacity by drip irrigation, although irrigation was stopped during the few days of each measuring campaign so as not to interfere with water flux measurements. No significant differences (at P < 0.05, paired t-test, n = 3) in predawn leaf water potential occurred after a few days of withholding watering, indicating that potential changes in soil moisture had no effect on plant water status over the course of the experiment.

Statistical analyses

Transpiration was calculated from the slope of the linear regression between lysimeter weight and time over successive 3-hour periods. Statistical analyses of temporal patterns were then conducted by fitting Generalized Additive Mixed Models (GAMMs) with automated smoothness selection33 in the R software environment (mgcv library in R 3.0.2, The R Foundation for Statistical Computing, Vienna, Austria), including macrocosm as a random factor and excluding outliers (values above the 95% quantile during day or night). This approach was chosen because it makes no a priori assumption about the functional relationship between variables. We accounted for temporal autocorrelation in the residuals by adding a first-order autoregressive process structure (nlme library34). Significant temporal variation in the GAMM best-fit line was analysed after computation of the first derivative (the slope, or rate of change) with the finite differences method. We also computed standard errors and a 95% point-wise confidence interval for the first derivative. A trend was subsequently deemed significant when the derivative confidence interval was bounded away from zero at the 95% level (for full details on this method see ref. 35). Non-significant periods, reflecting the lack of a locally significant trend, are illustrated on the figures by the yellow line portions; significant differences occur elsewhere.

Differences in the magnitude of total transpiration and in canopy conductance for each species under the different radiation environments were assessed with mixed models that included radiation as a fixed factor (plus hour of day as a fixed factor for canopy conductance), with macrocosm and day of measurement as random factors (each measuring campaign lasted 3–4 days).

SSH From the Ground Up

If you work in the IT industry, chances are you’ve been using OpenSSH for your day-to-day work for a long time now.

OpenSSH, however, provides much more than “just” a remote shell on *nix systems (and apparently on Windows too now!). In this article we’re going to explore some of the less obvious uses of ssh and introduce a few accessory tools that make using it even better.

Conventions for the Examples

We need to set some terminology to avoid confusion down the road. In particular we need:

  • A client host, identified simply as local. In the examples, local$ identifies the shell prompt; you don’t have to copy that part of the example to try it out.
  • A few remote hosts, identified as remote. In case we need multiple remote hosts, a numeric index will be added (remote1, remote2, and so on). The prompt convention applies here as well.
  • Users will simply be local_user and remote_user. In case we need more users in an example, a numeric index will be added, as for the remote hosts.

The output of the examples is mostly taken from an OS X installation, but barring minor differences in default paths (for example, OS X puts home directories under /Users instead of the more common /home of Linux distributions) everything else stays the same.

The Basic Stuff

Connecting to a Remote Host

Let’s start small and build on each new concept. We’re going to assume that SSH is installed on both the client and the server; for instructions on how to do so, please refer to your OS documentation. Expert users might want to skip the first couple of sections and jump right to the meaty bits.

To connect to a remote host just fire up your terminal emulator and simply connect:

`local$ ssh -l remote_user remote`

or more simply:

local$ ssh remote_user@remote

ssh will perform its key exchange and set up the encrypted connection, ask for a password and, if everything checks out, let you in and present the remote server’s prompt. From that point on it is as if you were at a terminal in front of the remote server, so you can start working away.

If your local and remote users have the same name, you can skip the username in your ssh invocation.

Configuration, Part One

Realistically, you do not want to type the fully qualified hostname every time you connect to a machine, so it is possible to define short names through the ssh client configuration.

On *nix systems, the client configuration generally resides in two places:

  • The global ssh configuration in /etc/ssh/ssh_config
  • The user configuration in $HOME/.ssh/config

The format and the options are the same for both files as detailed in the ssh_config manual page.

Let’s take a look at a simple configuration:

    Host remote
    User remote_user

With these two lines in place you can just type ssh remote: the ssh client will read the configuration and connect to remote as remote_user. If no DNS record exists for the remote host, an IP address can be specified with the Hostname directive.
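For instance (the address below is a documentation placeholder, not from any real setup), a host without a DNS record could be configured as:

```
    Host remote
    Hostname 203.0.113.10
    User remote_user
```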

Public Key Authentication

By default, ssh accepts password-based authentication, but authentication with asymmetric keys is available and provides better security than passwords.

The increased security comes not only from the greater entropy of a generated key compared to a password but, most importantly, from the fact that the private part of the key is never sent over the network.

Let’s generate a new ssh key using the ssh-keygen tool:

local$ ssh-keygen
Enter file in which to save the key (/Users/local_user/.ssh/id_rsa): /Users/local_user/.ssh/blog_key #replace this with your own path or leave the default
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /Users/local_user/.ssh/blog_key.
Your public key has been saved in /Users/local_user/.ssh/blog_key.pub.
The key fingerprint is:
SHA256:mA/0ip4RUojNRtJK0aZhyJr7HaS1aOExLZzcqApeqC0 local_user@local
The key's randomart image is:
+---[RSA 2048]----+
|+o+              |
|.@.+             |
|==O=. .          |
|+oX.=. +         |
| +oX..+ S        |
|+.=ooo +         |
|=+..o.. .        |
|E.o..o           |
| .  o            |
+----[SHA256]-----+

As the output mentions, two files were created inside the ~/.ssh directory: a private part (blog_key) and a public part (blog_key.pub). The public part is the one that needs to be distributed to the remote hosts.

By default ssh-keygen generates 2048-bit RSA keys, which are generally safe enough for normal use; if you want a stronger key, add the -b 4096 option to create a 4096-bit key.

It is also possible to generate different key types, such as ecdsa or ed25519: simply add the -t switch followed by the type of key you’re interested in.
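As a quick sketch (the paths here are throwaway examples, not the defaults), generating both a stronger RSA key and an ed25519 key looks like this:

```shell
# Generate a 4096-bit RSA key and an ed25519 key into a scratch directory,
# with empty passphrases (-N '') so the commands run non-interactively
tmp=$(mktemp -d)
ssh-keygen -q -t rsa -b 4096 -N '' -f "$tmp/rsa_key"
ssh-keygen -q -t ed25519 -N '' -f "$tmp/ed25519_key"
ls -1 "$tmp"   # each invocation produced a private key and a .pub file
```

In real use you would of course set a passphrase rather than pass -N ''.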

The public key needs to be added to ~/.ssh/authorized_keys on the remote hosts; let’s copy it manually to our remote:

local$ cat /Users/local_user/.ssh/blog_key.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDRQ6CjxzskkXQtC9H2cV9jGt8T8kmZHeLXNrt5drO0PuKeA2VvmG96m5EFCgKTR5Rmug2poFy7QvRlyuwauEjPm0cImX1VpVKNO4GTo7PmEmt1MwvkcSzMb2U5AgXIeth7lxtZ0H0jOaW3371xfyaHn0L/LXbZyFOiHkz/TreNg0Mj1FatX8nQYSNaQwD1byTAu2Z8WENJ0JY26zmbMr1hqKYXTJ5GZ8NkqK5VmW+zQ/wnOjMaKz9IchULHfedaakDUVWFY1hailOzhHU+H32gFZLFneHG1mlKQQ3P3TCWmecfG5mARLPrlamR2UvLSZin4LOi6XwSfJ4eWcUQlcg/ local_user@local

local$ ssh remote
remote$ mkdir -p ~/.ssh; chmod 700 ~/.ssh
remote$ echo "$content_of_the_public_key_file" >> ~/.ssh/authorized_keys

Done! The next ssh session to remote will not ask for the remote user’s password but, if set, for the key’s passphrase (it won’t prompt for anything if no passphrase was set).

To simplify things, most Linux distributions ship with a utility script called ssh-copy-id which does exactly the same thing as the example above, but in an automated way: simply invoke ssh-copy-id remote. On OS X this utility is missing by default but can be installed. If you have multiple keys on your system you can use the -i option, passing in the path to the key you want to install on the remote host.

“Passwordless” ssh

We now have a configuration and a public key installed on the remote host, but we still have to type the key’s passphrase every time we open a new connection. If you find yourself opening tons of connections to many hosts, this gets old quite fast, and one might be tempted to remove the passphrase from the key, which is of course not a great idea.

Luckily the OpenSSH developers come to the rescue again with ssh-agent.
ssh-agent is a program that runs on your local machine and acts as an authentication agent for every ssh client invocation: once you add a key to the agent, it securely stores an unencrypted version of the key in memory.

To start the agent simply run:

local$ eval $(ssh-agent)

Or, if you prefer, run ssh-agent and then export the SSH_AGENT_PID and SSH_AUTH_SOCK variables that ssh-agent prints out when it starts.

Those two environment variables are checked by the ssh client to find where the agent lives and which socket to use.

By default the agent socket is created under a temporary directory, but it’s possible to ask the agent to set it into a specific path using the -a option, which is useful to automagically reuse an agent between user sessions on the local machine. More on this later.
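A minimal sketch of that pattern, assuming any private path works for the socket (here a throwaway temp path stands in for something like ~/.ssh/sock/agent):

```shell
# Start an agent bound to a known socket path so other shells can reuse it
SOCK=$(mktemp -u)
eval "$(ssh-agent -a "$SOCK")" > /dev/null
# A second shell only needs SSH_AUTH_SOCK pointed at the same path:
export SSH_AUTH_SOCK="$SOCK"
ssh-add -l || true   # "The agent has no identities." until a key is added
```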

Once the ssh agent is running, we can add our key to it by running:

local$ ssh-add ~/.ssh/blog_key
Enter passphrase for /Users/local_user/.ssh/blog_key:
Identity added: /Users/local_user/.ssh/blog_key (/Users/local_user/.ssh/blog_key)

And to verify that our key is present and enabled:

local$ ssh-add -l
2048 SHA256:mA/0ip4RUojNRtJK0aZhyJr7HaS1aOExLZzcqApeqC0 /Users/local_user/.ssh/blog_key (RSA)

At this point, a connection to our remote host will happen without asking for any password: authentication is handled by the agent.

We’ll come back to this later in the article.


Forwarding Traffic

One of the most appreciated features of SSH is its ability to forward traffic in a variety of ways: local and remote tunnels, and dynamic proxying.

Local forwarding

Let’s assume that on remote there is a service listening on a port bound only to the loopback interface, but it would be extremely useful to connect to it from the local machine. A typical example is connecting to a database instance with a local client; here we’ll use PostgreSQL, but any other service listening on a TCP port can be exposed to the local machine through an ssh tunnel.

For example the following will not work:

local$ nc -vz remote 5432
nc: connect to remote port 5432 (tcp) failed: Operation timed out

ssh can come to the rescue with the -L option:

local$ ssh -L localhost:5432:localhost:5432 remote

At this stage, every TCP connection to port 5432 on the local machine will be forwarded to port 5432 on the remote host.

Unfortunately the user experience for tunnels in ssh is not quite straightforward; let’s dig into the details.

-L is simply the switch that enables a local forwarding tunnel; what follows is the definition of the tunnel, which can be read as:

    local_bind_address:local_port:remote_bind_address:remote_port

The first two values refer to the local end of the tunnel, where ssh sets up a listening socket to accept incoming connections; the last two are the remote end, i.e. where ssh forwards the traffic coming from the tunnel.

In particular, local_bind_address can be omitted; it defaults to the loopback interface of the client machine, but it is possible to specify a public interface.
The same goes for remote_bind_address, which can even be a third host reachable only from the remote host.

Following the current example, on our local host we’ll see a process listening on port 5432:

local$ lsof -n -iTCP:5432 | grep LISTEN
ssh     14443 local_user    9u  IPv6 0x39970c5c548f41ff      0t0  TCP [::1]:5432 (LISTEN)
ssh     14443 local_user   10u  IPv4 0x39970c5c5631601f      0t0  TCP 127.0.0.1:5432 (LISTEN)

Which means that the tunnel is established. Barring any firewall rules on remote, we should be able to make at least a TCP connection to port 5432 on localhost and have it forwarded to the remote server on the same port:

local$ nc -vz localhost 5432
Connection to localhost port 5432 [tcp/postgresql] succeeded!

It works! But what about a real PostgreSQL client connection?

local$ psql -h localhost -U dbuser
Password for user dbuser:
psql (9.4.5, server 9.4.5)
SSL connection (cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256)
Type "help" for help.

Of course there is no need to use the same port on both the local and the remote machine, but it often makes sense, especially with services whose clients connect to a predefined port by default (as in PostgreSQL’s case); otherwise you’ll have to point your client at a different local port.

As with all ssh command line options, this is also exposed through the configuration file. The directive is called LocalForward and it accepts the same parameters as the command line, so the following configuration will open a local tunnel for port 5432 every time a new connection to remote is started:

     Host remote
     User remote_user
     LocalForward localhost:5432:localhost:5432

Remote Forwarding

Sometimes it’s useful to do the opposite of a local forward: expose a port on a remote host that leads back to a port on the local machine.
This is especially useful when the local machine is behind a firewall and not directly exposed to the Internet. Of course “evading” the firewall through ssh to expose non-whitelisted ports might violate company policies, so please be careful when using this in a professional setting :).

The option to enable remote forwarding works pretty much the same way as local forwarding, but with the arguments swapped: the first bind_address:port pair refers to the remote machine and the second to the local machine.

For this example we’re going to try something that might seem weird at first, but it has proven to be a lifesaver on more than a few occasions.

Specifically I used this trick time and time again to access my home network, which is sadly behind a NAT outside my control. In this case remote is a server which is remotely accessible through SSH.

local$ ssh remote -R localhost:1234:localhost:22

Great! Now leave the session open, and from a shell on the remote box (either through ssh or directly on remote’s console) try an ssh connection to localhost on port 1234:

remote$ ssh localhost -p 1234
The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is 50:22:3d:bc:a7:02:45:e1:a0:1e:df:38:0f:85:6f:f9.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (RSA) to the list of known hosts.

As you can see, remote was able to access local, which is behind a NAT.
Of course all this works only as long as the initial connection set up by local stays up, but a tool such as autossh can help keep the connection reliable.

It is also possible to let other hosts access the forwarded port, but for that you’ll need to configure the ssh server on remote to allow it. The option is called GatewayPorts and it is off by default, for a number of valid security reasons. It might be fine if remote is only reachable from a trusted network, but I would not recommend it: the security risks are just too high, and there are better ways to achieve the same result (such as a properly set up VPN server).

Similarly to what happens for local forwards, we can configure ssh to open a remote forward on every connection using the RemoteForward configuration option.

Again, please be careful when using this in a professional setting: it might be considered unauthorized access and cause you a lot of trouble.

Dynamic Proxying

Building on top of local and remote forwards, SSH is also able to act as a dynamic proxy, which can be considered a sort of poor man’s VPN. With an ssh dynamic proxy set up (and your browser configured to use it), all your connections to websites will be routed through the remote box.
This is as simple as running:

local$ ssh remote -D 8080

The ssh client will start listening locally on port 8080 (or whatever port is specified after -D) and expose a SOCKS proxy; the next step is to configure the browser to use it.
There are too many browsers and proxy-management extensions to provide a meaningful overview here, so refer to your browser’s documentation; just know that the proxy type is SOCKS5 and the address will be localhost:$PORT.

Connection multiplexing

SSH is able to multiplex connections, creating a local socket for each host and reusing it when a new session is opened to the same box. This speeds up session setup; more precisely, it does not set up a new connection at all, it just asks the remote ssh server, over the already established connection, to open a new user session with the same parameters.

This requires a directory on the filesystem where the sockets will be created. I personally use ~/.ssh/sock, but any directory owned by your user and with restricted permissions (0700) will do.

local$ mkdir -p ~/.ssh/sock
local$ chmod 700 ~/.ssh/sock

Then edit the configuration and add the following:

    Host *
    ControlMaster auto
    ControlPath ~/.ssh/sock/%r@%h:%p

The first line is just the definition of the ‘ssh host’, but instead of an identifier it uses a pattern; in this case the configuration applies to all hosts.
It is possible to restrict the pattern, or to specify these options for a single host only.

The ControlMaster line instructs ssh to reuse a master connection if one exists and to create one if it does not.

ControlPath instead specifies where the socket file is created. The weird parts at the end of the path are placeholders; in this example:

  • %r is the remote user
  • %h is the remote host
  • %p is the remote port

which means that invoking ssh remote will create ~/.ssh/sock/remote_user@remote:22.

There are many more substitutions documented in the ssh_config manual page. It is recommended to specify at least the three placeholders mentioned here, which suffice to uniquely identify a connection to a host.
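One related option worth knowing about (my addition, not part of the original setup) is ControlPersist, which keeps the master connection open in the background for a while after the last session closes, so quick reconnects stay fast:

```
    Host *
    ControlMaster auto
    ControlPath ~/.ssh/sock/%r@%h:%p
    ControlPersist 10m
```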

Bastion Hosts and ProxyCommands

It is normal practice to expose a remote infrastructure through a bastion host, that is, a server which exposes ssh to the Internet and allows access to the servers behind it, which are not exposed. In the examples I’m going to refer to this host simply as bastion.

Let’s say that we have a bunch of servers hosted somewhere and exposed through bastion, and we want to access a remote host behind it. One way to do it is to simply connect to the bastion and from there start a connection to the destination host:

local$ ssh bastion
bastion$ ssh remote

But again, this gets old really fast; luckily ssh provides the ProxyCommand option, which helps automate the process. ProxyCommand is a generic way to override the default connection mechanism in ssh and pipe the connection through an external command (which can be ssh itself, as in the following example).

To enable the use of a bastion host through ProxyCommand add something similar to your ssh configuration:

    Host *
    ProxyCommand ssh bastion -W %h:%p

%h and %p have the same meaning as before, remote host and remote port (they’re in fact default placeholders recognized by every option that deals with hosts), and the -W option tells the ssh client that it should forward standard input and standard output to the given host.

Assuming the bastion and the remote hosts are already set up for key authentication as described earlier, ssh will automatically connect to the bastion host, which will open a connection to remote and forward standard input and output to it.

There are also lots of neat tricks you can do with ProxyCommand, since ssh just executes whatever command you specify in the configuration as a normal process, and it even goes through variable expansion.

For example, having to type the bastion’s name for every ssh connection gets in the way after a while, so I have set up my ssh configuration to access my own personal servers behind a bastion host in this way:

    Host *.home
    ProxyCommand ssh bastion -W $(basename %h .home):%p

And I invoke ssh as ssh host.home: basename strips the .home suffix, and the bastion forwards stdin/stdout to host. The bastion is set up to use the local DNS search path, so I can use non-fully-qualified hostnames inside my home network.
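To see what that substitution does, you can run basename by hand:

```shell
# basename strips the given suffix from the hostname
basename remote1.home .home   # prints: remote1
```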

Another example is to route SSH connections over TOR:

local$ ssh -o "VerifyHostKeyDNS=no" -o ProxyCommand="nc -X 5 -x localhost:9150 %h %p"

In this example we’re using netcat to proxy the ssh connection through the Tor SOCKS server listening on port 9150, and we’re disabling DNS host key verification to avoid any potential leak of information outside the Tor network.

SSH Escape Characters

It often happens to me: I open a quick ssh session to verify something or run a couple of quick commands... Fast forward two hours and I’m still connected to the box, typing furiously, with obscure environment variables set, a few background processes running, and of course none of them under tmux or screen (it was a five-minute job, after all). It’s now 3 AM and I notice that I really need a local tunnel to complete what I’m doing... But I don’t want to lose any of the running processes, nor do I want to mess around with things like disown and retty.

To open a local tunnel I’d have to disconnect, lose everything in the background, reconnect with forwarding enabled, and re-set up my environment, which is painful (especially at 3 AM, when this sort of thing happens to me)... Luckily the OpenSSH developers thought of such a possibility and introduced a set of escape sequences in the ssh client.

The ssh escape sequences are prefixed by a tilde (~) character, and ~? displays the help for all of them:

local$ ssh remote
remote$ ## press in sequence <ENTER>~?
Supported escape sequences:
 ~.   - terminate connection (and any multiplexed sessions)
 ~B   - send a BREAK to the remote system
 ~C   - open a command line
 ~R   - request rekey
 ~V/v - decrease/increase verbosity (LogLevel)
 ~^Z  - suspend ssh
 ~#   - list forwarded connections
 ~&   - background ssh (when waiting for connections to terminate)
 ~?   - this message
 ~~   - send the escape character by typing it twice
(Note that escapes are only recognized immediately after newline.)

The escape sequences I use the most are ~C and ~#, and when I’m dealing with unresponsive sessions ~., which terminates the connection, comes in handy as well.
Note again that escape sequences are recognized only immediately after a newline.

The beauty of ~C is that it allows you to open and remove port forwards without breaking the connection. On its command line the format is exactly the same as the client options; let’s see a practical example.

Let’s start a connection with no forwards enabled, then enable a local forward to port 5432 (similar to the earlier example):

local$ ssh remote
remote$ ## type <ENTER>~C
ssh> ?
      -L[bind_address:]port:host:hostport    Request local forward
      -R[bind_address:]port:host:hostport    Request remote forward
      -D[bind_address:]port                  Request dynamic forward
      -KL[bind_address:]port                 Cancel local forward
      -KR[bind_address:]port                 Cancel remote forward
      -KD[bind_address:]port                 Cancel dynamic forward

remote$ ## type <ENTER>~C again
ssh> -L 5432:localhost:5432
Forwarding port.

As you can see from the help message of the built-in command line, it is possible to enable all kinds of forwards, using the same syntax you would use on a normal ssh invocation.

Some Tips and Tricks

Now that we’ve gone through the various features, let’s see how to put them to use with a few tips and tricks.

Copying Files

SSH comes with a tool for remote file transfer called scp, which uses the ssh transport behind the scenes and honors the options specified in the ssh configuration.
scp mimics cp for remote files; the basic example is the following:
scp local_file remote:/remote/path

The file will be transferred to the remote host at the path specified after the : character. It is also possible to preserve permissions and access/modification times with the -p switch.

But what if you want to copy files between two remote hosts? Just specify two remote servers, one as the source and the other as the destination:

scp remote1:/path/to/source/file remote2:/path/to/destination
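Incidentally, when neither path names a remote host, scp simply performs a local copy, which makes the -p switch easy to try out without a server (a throwaway sketch):

```shell
# Copy a file locally with scp, preserving times and permissions (-p)
tmp=$(mktemp -d)
echo "hello" > "$tmp/src"
scp -q -p "$tmp/src" "$tmp/dst"
cat "$tmp/dst"   # prints: hello
```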

Running One Shot Commands

With ssh it’s also possible to run one-shot commands without leaving a shell open. For example:

local$ ssh remote uptime
 16:25:04 up 15 days,  6:28,  0 users,  load average: 0.00, 0.01, 0.05

will show the output of uptime on the remote host.

This is especially useful for running the same command on a small set of hosts. For example, to obtain the uptime of 10 remote hosts (remote1 to remote10 in the example below) one might use something like the following:

for i in $(seq 1 10); do ssh remote${i} uptime;done

The output will consist of 10 uptime lines, one per host. This also means that you can pipe the output from ssh to other commands through normal Unix pipes.

It is also possible to run complex commands through ssh, but to avoid problems with pipes and quotes, which will inevitably confuse you when the local shell tries to interpret some of them, I prefer to put the commands in a text file and invoke ssh this way:

ssh remote $(cat command_file.txt)

The file is read by the shell (through command substitution) and passed to ssh as arguments, and ssh forwards the commands to the remote host.

Automatically Starting a Screen (or tmux) Session or Reattaching to an Existing One

Given that we can run one-shot commands, we can easily extend this to connect to a host and automatically start a screen session:

local$ ssh remote screen -xRR

or with tmux:

local$ ssh remote tmux a -d

Of course, if you use named sessions, do not forget to add the session name to the command.

Fixing Remote Host Verification Failed Messages

Every time you connect to a new host, ssh records the cryptographic fingerprint of the key presented by the server, and on subsequent connections it checks that the key presented is the same as the recorded one. If it is not, ssh aborts the connection to avoid falling for a man-in-the-middle attack.

However, it happens that a remote host is rebuilt, or for some reason its server key is regenerated, especially when working with cloud infrastructure, where capacity is elastic and hosts are rebuilt frequently.

If you’re sure that the host key was legitimately changed and you still want to connect, you can remove the old key from the known_hosts file using the -R switch of ssh-keygen:

local$ ssh-keygen -R remote
# Host remote found: line 103
/Users/local_user/.ssh/known_hosts updated.
Original contents retained as /Users/local_user/.ssh/known_hosts.old
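A safe way to experiment is on a scratch known_hosts file (everything below is throwaway; the key is generated on the spot): -F finds a host’s entry, -R removes it:

```shell
# Build a scratch known_hosts with one entry for "remote", then remove it
kh=$(mktemp)
key=$(mktemp -u)
ssh-keygen -q -t ed25519 -N '' -f "$key"
printf 'remote %s\n' "$(cut -d' ' -f1,2 "$key.pub")" > "$kh"
ssh-keygen -f "$kh" -F remote    # shows the matching entry
ssh-keygen -f "$kh" -R remote    # rewrites the file without it (backup in .old)
```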

Agent Forwarding and ssh-ident

Earlier in the article we saw how ssh-agent holds your keys securely in memory on your local machine; it is also possible to forward the agent to a remote host and use the keys in your agent to log on to further remote hosts from there:

local$ ssh remote -A
remote$ ssh-add -l
2048 9c:e3:59:71:13:75:fb:24:e8:39:dd:ba:a0:3e:33:67 /Users/local_user/.ssh/blog_key (RSA)

-A is equivalent to specifying ForwardAgent yes in the ssh config.
Without -A in the ssh invocation, the result would have been an error mentioning that no ssh-agent instance is accessible on the remote host.

SSH agent forwarding, however, comes with a major security drawback. It works by creating a socket on the remote host that is used to communicate (through the ssh channel) with the agent on the local machine. This socket is created under /tmp and is accessible only by the remote user and root.

This means that any attacker (or playful colleague?) with root access to the remote machine has access to all your ssh identities for the duration of your connection.
Needless to say, if you have multiple ssh keys for different kinds of hosts, the attacker gains access to all of them, which is not the best of things...

A better solution is to have multiple ssh keys (one per environment, system, customer, whatever makes most sense) and to forward only the relevant identity. This way the risk is not completely eliminated, but the impact of a breach of the forwarded agent is drastically reduced.

ssh-ident is designed for exactly this purpose: it’s an ssh-agent manager that spins up agents and loads identities on demand the first time they’re needed (and by default unloads them after a set timeout). ssh-ident is a wrapper around ssh; to use it, just set an alias that overrides it:

alias ssh='/path/to/ssh-ident'

Then call ssh as normal. ssh-ident will spin up an ssh-agent and by default try to load the default key:

local$ ssh remote
Loading keys:
Enter passphrase for /Users/local_user/.ssh/id_rsa:
Identity added: /Users/local_user/.ssh/id_rsa (/Users/local_user/.ssh/id_rsa)
Lifetime set to 7200 seconds

At this point, setting up new identities is a matter of creating as many keys as you need and saving them in ~/.ssh/identities/. Here is a quick example with two keys, one for “work” and one for “home”:

mkdir -p ~/.ssh/identities; chmod 700 ~/.ssh/identities

ssh-keygen -f ~/.ssh/identities/work
ssh-keygen -f ~/.ssh/identities/home

And drop the following in the ssh-ident configuration file (~/.ssh-ident):

    (r"home", "home"),
    (r"work", "work"),

This configuration tells ssh-ident which identity to use for hosts that match ‘home’ or ‘work’. There are multiple ways to specify which identity to use (and also lots of ways to add ssh options on the fly), as described in ssh-ident’s README; every user should find their own way to map identities to hosts.

Let’s see what happens if we ssh to a ‘home’ host without ssh-ident. For the purposes of the example, I’ve already set up a local agent and added both keys generated previously:

local$ /usr/bin/ssh remote.home
remote$ ssh-add -l
2048 de:df:85:53:9d:87:cb:c7:c1:17:f7:10:7a:41:e0:54 /Users/local_user/.ssh/identities/work (RSA)
2048 9c:e3:59:71:13:75:fb:24:e8:39:dd:ba:a0:3e:33:67 /Users/local_user/.ssh/identities/home (RSA)

As you can see, both ssh keys are forwarded as expected. Which means that I’m technically leaking a work-related key on my home network, and conversely, when I connect to a host at work, I’m leaking my home key.

Now let’s try with ssh-ident:

local$ ssh remote.home
Loading keys:
Enter passphrase for /Users/local_user/.ssh/identities/home:
Identity added: /Users/local_user/.ssh/identities/home (/Users/local_user/.ssh/identities/home)
Lifetime set to 7200 seconds
remote$ ssh-add -l
2048 9c:e3:59:71:13:75:fb:24:e8:39:dd:ba:a0:3e:33:67 /Users/local_user/.ssh/identities/home (RSA)
remote$ logout
local$ ssh remote.work
Loading keys:
Enter passphrase for /Users/local_user/.ssh/identities/work:
Identity added: /Users/local_user/.ssh/identities/work (/Users/local_user/.ssh/identities/work)
Lifetime set to 7200 seconds
remote$ ssh-add -l
2048 de:df:85:53:9d:87:cb:c7:c1:17:f7:10:7a:41:e0:54 /Users/local_user/.ssh/identities/work (RSA)

Quite neat!

Of course the forwarded key is still exposed to any user with root access on the remote machine, but an eventual attacker would be able to compromise only one key instead of all the identities in your ssh agent.


This article got quite long, yet it only scratches the surface of ssh’s almost countless options and the weird stuff that can be done with them; hopefully it gives you enough information to improve the productivity and security of your day-to-day work.

For anything else, I strongly recommend reading the wonderful manpages that come with ssh; there are plenty more gems in there to be discovered, and the number of useful features grows with every OpenSSH release.

Originally written by Francesco Pedrini.


Employees are quitting over long commutes, but remote work could keep them around

Lengthy drives to work have caused 23% of employees to quit their jobs, with Chicago, Miami, New York, and San Francisco having the worst commutes.

More than one in five employees has quit a job because of a painful commute, according to a Robert Half survey released Monday. Of the 28 major US cities surveyed, Chicago, Miami, New York, and San Francisco had the most resignations attributed to bad commutes, the press release said.

The report surveyed more than 2,800 US workers aged 18 or older in those 28 cities. Younger professionals (ages 18-34) had the highest rate of resignations due to commute length, according to the release.


Some 22% of employees surveyed said their commute has gotten much worse over the past five years, the release said; Seattle, Denver, Austin, and San Francisco were cited as the cities where workers felt their commutes had deteriorated most.

On the other hand, workers in Miami, Los Angeles, New York, and Charlotte have seen the largest improvement in their commutes over the past five years, added the release.

“Commutes can have a major impact on morale and, ultimately, an employee’s decision to stay with or leave a job,” said Paul McDonald, senior executive director for Robert Half, in the release. “In today’s candidate-driven market, skilled workers can have multiple offers on the table. Professionals may not need to put up with a lengthy or stressful trip to the office if there are better options available.”

Of those who said their commute has gotten worse, 60% said their companies haven’t taken any steps to relieve the burden. If bad commutes are a common problem at a company, management needs to step in if it wants to retain employees.

Organizations could provide remote work as an alternative option to navigating a bad commute. In fact, remote work is becoming more of a norm at companies, with many workers claiming to be more productive and focused when working out of the office.

Check out this TechRepublic article for tips on how to manage a remote workforce.

The big takeaways for tech leaders:

  • Some 23% of US employees quit their jobs because of bad commutes to work. — Robert Half, 2018
  • If companies are based in cities with notoriously bad commutes, then they should consider offering alternatives, like remote work, if they want to retain their employees.



Sainsbury’s announce huge toy sale with 50% off most toys – and it’s perfect for Christmas presents

1. Get paid to shop

Cashback websites like TopCashback or Quidco will offer you money to shop through them online. The sites are free and safe to use – and if you’re a new member, you really can reap the rewards.

They operate through referrals: if you visit a store and make a purchase through one of their websites, the retailer pays them a referral fee, and they pass a cut of that payment on to you.

It’s easy to sign up to – all you have to do is enter basic details to create an account. Once you’re in, you’ll be able to start shopping – avoid logging out of your account as the website will need to track your payment.

2. Ask for vouchers

If there is a special offer in a supermarket but the item is out of stock, you can ask in store customer services for a voucher offering you either a similar product at the same price, or a coupon to take advantage of the offer at a later date when it’s back in stock.

This is not something you are entitled to; however, if you ask nicely, in most cases you won’t be refused.

3. Buy reduced items

As products approach their expiry date, supermarkets will start to slash prices in a bid to recoup as much as they can before the item is taken off the shelf. This is great news for the consumer, as it means you’ll be able to snap up groceries for a fraction of the cost – we’ve spotted bread down to 10p in the past.

This is easier said than done though. Each store will have its reduced hour – a period of the day when staff scour the shelves for products approaching the end of their shelf life and cut the price – usually around 5-7pm. Many shoppers know about it, so prepare to be quick.

Remember, if items are close to their sell-by date, you can just put them in the freezer for later use.

4. Buy own brand basics

There are certain cupboard items where own brand just doesn’t taste the same – Heinz baked beans, for instance. But, on the other hand, there are many basics that you CAN switch to and make a saving without even noticing the difference.

These include kitchen roll, salt, sugar, chopped tomatoes and most cleaning or household products.

5. Use loyalty rewards

Loyalty schemes are designed to reward regular customers and keep them coming back for more. This is often achieved through exclusive deals, discount codes, coupons and cash off once you’ve accumulated a certain amount. Don’t be too loyal though, or you’ll end up missing out on deals elsewhere.

You can collect points when you buy in-store, which you can use as a discount on your next purchase. Note, loyalty schemes are not a substitute for a good deal and will not save you money if you go back to the store without shopping around first.

6. Don’t be fooled by discounts

As tempting as these may look, not all ‘bargains’ are as good as they seem. Look closer, and you’ll spot deals like ‘2 for £2’ on items that cost just 95p each. Don’t be fooled by such trickery – always do your maths.
