
I've created a Linux VM on Azure with a Postgres service running on it. The VM runs well, but I experience disconnects.

The VM drops SSH sessions - they just get broken - and the same happens to the connections to Postgres.

The Postgres DB is a small development database with about 20 tables of 20-30 records each. The VM is otherwise idle.

I didn't experience this behavior before. I have recreated this VM from scratch several times and tried the region closest to me - Amsterdam (I'm in Germany) - as well as Dublin; the situation is the same in both regions.

What could be the problem?

Alexander
  • Are the SSH sessions being actively used? Depending on your SSH client and the config of the server, idle connections get disconnected automatically after a defined period of time. What connection type are you using - could it be that your internet connection gets reset by receiving a new IP address (given possible DSL in Germany)? – cora Nov 09 '19 at 17:16
  • Using WinSCP for SSH and pgAdmin 4 for manipulating the Postgres DB. pgAdmin disconnects after 5 minutes of inactivity; WinSCP too. Using the same image on a local network and accessing it from another local network via VPN - no problems at all. I'm not getting a new IP address; I have a static one on a fast 300 MBit line... – Alexander Nov 09 '19 at 19:06

1 Answer


Edit /etc/ssh/sshd_config, the server-side configuration file, and add these two options if you want to prevent all your clients from being disconnected:

ClientAliveInterval 120

ClientAliveCountMax 720

The first option makes the server send a null packet to each client every 120 seconds; the second makes the server close the connection only after the client has been unresponsive for 720 such intervals, that is 720 * 120 = 86400 seconds = 24 hours.
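After saving the file, the SSH daemon has to be restarted for the options to take effect; for example, on a systemd-based image (the service name is an assumption - it is sshd on most distributions, ssh on Debian/Ubuntu):

sudo systemctl restart sshd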

On the client side you can also configure keep-alives. Note that the ClientAlive* options above are server-side only; the client-side equivalents (in ~/.ssh/config or /etc/ssh/ssh_config) are:

Host *

ServerAliveInterval ...

ServerAliveCountMax ...
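As an illustration (the values are placeholders, not recommendations), a client entry could look like this; it sends a keep-alive every 120 seconds and gives up after 30 unanswered probes, i.e. after one hour:

Host *

ServerAliveInterval 120

ServerAliveCountMax 30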

You can look at https://unix.stackexchange.com/a/3027 for a more detailed explanation.

Another possibility is that you need to configure the idle timeout on the Azure Load Balancer; you can read more at https://azure.microsoft.com/en-us/blog/new-configurable-idle-timeout-for-azure-load-balancer/
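If your VM sits behind a load balancer that you manage with the current Az PowerShell module, the rule-level idle timeout could be raised along these lines (a sketch only - the resource group, load balancer name, and rule index are placeholders for illustration):

$lb = Get-AzLoadBalancer -ResourceGroupName "myRG" -Name "myLB"

$lb.LoadBalancingRules[0].IdleTimeoutInMinutes = 15

Set-AzLoadBalancer -LoadBalancer $lb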

PowerShell Examples

Configure the TCP idle timeout for your Instance-Level Public IP to 15 minutes.

Set-AzurePublicIP -PublicIPName webip -VM MyVM -IdleTimeoutInMinutes 15

IdleTimeoutInMinutes is optional. If not set, the default timeout is 4 minutes; it can be set to anything between 4 and 30 minutes.

Set Idle Timeout when creating an Azure endpoint on a Virtual Machine

Get-AzureVM -ServiceName "mySvc" -Name "MyVM1" | Add-AzureEndpoint -Name "HttpIn" -Protocol "tcp" -PublicPort 80 -LocalPort 8080 -IdleTimeoutInMinutes 15 | Update-AzureVM
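The two examples above use the classic (service management) cmdlets from the linked blog post. On a Resource Manager deployment with the current Az module, the equivalent for a public IP would look roughly like this (resource group and IP names are placeholders):

$pip = Get-AzPublicIpAddress -ResourceGroupName "myRG" -Name "webip"

$pip.IdleTimeoutInMinutes = 15

Set-AzPublicIpAddress -PublicIpAddress $pip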

wja
  • Thanks, will give it a try on the server side. Strange that such tricks were not necessary 4 or 5 months ago... – Alexander Nov 09 '19 at 21:25
  • Take a look at https://azure.microsoft.com/en-us/blog/new-configurable-idle-timeout-for-azure-load-balancer/ as it can also apply to your case. – wja Nov 10 '19 at 00:52
  • The parameters `ClientAliveInterval 120` and `ClientAliveCountMax 720` caused a kernel panic after 24 hours, and I've just disabled them. – Alexander Nov 10 '19 at 18:19
  • I will give IdleTimeout a try (it can also be set in the portal). The default setting is 8 minutes; I will increase it to 15 minutes. But unfortunately I don't think it will help. AFAIK WinSCP, PuTTY and pgAdmin send keep-alives on their own, and these keep-alives don't prevent the disconnects, unfortunately. – Alexander Nov 10 '19 at 18:26