The first request to disambiguate is slow, and memory keeps growing as more requests come in #158

Open
meganeinu7 opened this issue Apr 13, 2023 · 1 comment

Comments

@meganeinu7

I am wondering if this is caused by my configuration or something else.
We are testing the entity-fishing disambiguation service on Kubernetes 1.24.
We are using the grobid/entity-fishing:0.0.6 image for testing and followed the instructions here

e.g.

curl 'http://localhost:8090/service/disambiguate' -X POST -F "query={ 'text': 'The army, led by general Paul von Hindenburg defeated Russia in a series of battles collectively known as the First Battle of Tannenberg. But the failed Russian invasion, causing the fresh German troops to move to the east, allowed the tactical Allied victory at the First Battle of the Marne.', 'processSentence': [ 1 ], 'sentences': [ { 'offsetStart': 0, 'offsetEnd': 138 }, { 'offsetStart': 138, 'offsetEnd': 293 } ], 'entities': [ { 'rawName': 'Russian', 'type': 'NATIONAL', 'offsetStart': 153, 'offsetEnd': 160 } ] }"
  1. The very first request to the service takes close to 30 seconds; in the log below, it took 25 seconds.
    Any subsequent request takes less than 100 ms.
    We hit the service with a readinessProbe to make sure it is available, but even after the pod is ready, the first request, whether from outside or inside the container, is still slow (a warm-up workaround we are considering is sketched at the end of this comment).
[0:0:0:0:0:0:0:1] - - [13/Apr/2023:17:03:42 +0000] "POST /service/disambiguate HTTP/1.1" 200 205 "-" "curl/7.74.0" 1699
[0:0:0:0:0:0:0:1] - - [13/Apr/2023:17:03:53 +0000] "POST /service/disambiguate HTTP/1.1" 200 2017 "-" "curl/7.74.0" 25135
[0:0:0:0:0:0:0:1] - - [13/Apr/2023:17:03:56 +0000] "POST /service/disambiguate HTTP/1.1" 200 203 "-" "curl/7.74.0" 39
  2. The memory of the server grows steadily as more requests come in.
    E.g. it is currently at 21% and keeps climbing until the pod crashes (a check we plan to run is sketched after the top output below).
top - 17:17:59 up 1 day, 0 min,  0 users,  load average: 0.00, 0.01, 0.04
Tasks:   5 total,   1 running,   4 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.1 us,  0.1 sy,  0.0 ni, 99.8 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
MiB Mem :  15432.3 total,    430.2 free,   3694.9 used,  11307.3 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.  11407.6 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
    1 root      20   0 9632572 449000  29848 S   0.0   2.8   0:14.44 java
  116 root      20   0   80.4t   3.2g 444440 S   0.0  21.0   1:09.66 java
  145 root      20   0    6348   4116   3304 S   0.0   0.0   0:00.01 bash
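
For the second issue, we have not yet verified whether the growing resident memory is actually JVM heap or memory-mapped data pages being kept in the page cache; the huge VIRT (80.4t) for PID 116 suggests the knowledge-base files are memory-mapped. A rough check we plan to run inside the container (PID 116 is taken from the top output above; jstat needs the JDK tools and pmap needs procps, neither of which may be present in the image):

# Is the growth in the Java heap or in file-backed mappings?
jstat -gc 116 5000 3                    # sample heap/GC usage three times, 5 s apart
pmap -x 116 | sort -k3 -n | tail -20    # largest resident mappings: [heap]/[anon] vs mapped data files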

Any insight on these two issues would be appreciated!
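
In the meantime, one thing we are considering for the first issue, assuming the slowness comes from models being loaded lazily on the first disambiguation, is to fire a small warm-up request from inside the pod before it receives real traffic, e.g. from a postStart lifecycle hook or a startup script. A rough sketch (the query text is arbitrary and the port matches the example above):

#!/bin/sh
# Hypothetical warm-up: send one tiny disambiguation request so the first real
# request does not pay the model-loading cost. Retries until the service answers.
until curl -sf -o /dev/null -X POST 'http://localhost:8090/service/disambiguate' \
    -F "query={ 'text': 'Paris is the capital of France.' }"; do
  sleep 2
done

If this runs before the readinessProbe starts reporting the pod as ready, the pod would only receive traffic after the expensive first request has already been served.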

@cybernic

Hi, I can confirm that over time, the service loses performance.


Any solution?
