What is the best way to add two log files from my Ma...
# support
n
What is the best way to add two log files from my MacBook Pro to my SigNoz POC? I downloaded the log files from an HTTP server. We are not allowed to install the SigNoz agent, so I would like to know if the POC can be done that way. I already have SigNoz running on my MacBook Pro for the POC. Thanks
v
@nitya-signoz Can help
n
Hello Nitya, thanks. Will this work?
Copy code
receivers:
  filelog:
    include: [ /Untiled/Users/nooramin-ali/Downloads/error_log ]
    operators:
      - type: regex_parser
        regex: '^(?P<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) (?P<sev>[A-Z]*) (?P<msg>.*)$'
        timestamp:
          parse_from: attributes.time
          layout: '%Y-%m-%d %H:%M:%S'
        severity:
          parse_from: attributes.sev
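Before restarting the collector, it can help to check that the regex actually matches a line from the log file. A minimal shell sketch (the sample line below is invented for illustration; substitute a real line from your own error_log):

```shell
# Hypothetical log line in the '<date> <SEVERITY> <message>' shape the regex expects
sample='2023-08-21 10:15:30 ERROR something went wrong'

# Same pattern as the regex_parser above, with named groups dropped for grep -E
pattern='^[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2} [A-Z]* .*$'

if printf '%s\n' "$sample" | grep -Eq "$pattern"; then
  echo "line matches"
else
  echo "line does NOT match"
fi
```

If a real line does not match, the collector will still ingest the log body, but the time and severity attributes will not be parsed out.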
I can run the script in the SigNoz directory on the same Mac.
Do I need to run the script in Docker Desktop? Please point me to the location where I should run this script. Thanks
Also, please check whether my path is correct.
I think I figured it out; I will try it now.
I have saved the logs here.
Please let me know if it will work.
Any help to make this POC successful is appreciated.
I made changes to the log files, as seen.
Will they show up in SigNoz using your script? Thanks
I have restarted SigNoz. How long until I see these logs in the frontend?
Here is the restart output for SigNoz.
Please let me know if I have done it correctly.
I did not get a reply back from your team member; can you reply to my Slack by Friday, please? Thanks
Do you know why I am unable to log in to SigNoz? I get the following error: something_went_wrong
n
How are you running your OTel collector? If you are running it in Docker, then you will have to mount these files into the otel-collector container.
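For reference, a mount is declared under the otel-collector service's `volumes:` section of the SigNoz docker-compose file. A sketch of the idea (the service name and host path are illustrative and must be matched to your actual compose file):

```yaml
services:
  otel-collector:
    # ...existing image/command/config entries stay as they are...
    volumes:
      - ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
      # host path on the left, container path on the right; the filelog
      # receiver must then use the container-side path (/tmp/error_log)
      - /path/on/your/mac/error_log:/tmp/error_log:ro
```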
n
I had to uninstall and reinstall everything, Nitya.
I am using localhost to log in, and it will not let me log in any longer.
Nitya, where do I mount the files in Docker?
n
Where do I save this
Copy code
receivers:
  filelog:
    include: [ /Untiled/Users/nooramin-ali/Documents/Log file/access.log ]
    operators:
      - type: regex_parser
        regex: '^(?P<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) (?P<sev>[A-Z]*) (?P<msg>.*)$'
        timestamp:
          parse_from: attributes.time
          layout: '%Y-%m-%d %H:%M:%S'
        severity:
          parse_from: attributes.sev
so that the logs show up in SigNoz? We are doing the POC without installing the agent.
I am also unable to open SigNoz on my Mac via its IP address; I get the following error.
n
You will have to mount your file into the otel-collector container. Then add the path of the mounted file under
include:
and save the receiver here: https://github.com/SigNoz/signoz/blob/6fb071cf377a9662d063b082ca20c73db65cbec3/deploy/docker/clickhouse-setup/otel-collector-config.yaml Once done, restart your otel collector.
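Concretely, the edit to that otel-collector-config.yaml has two parts: the receiver itself and an entry in the logs pipeline. A sketch, assuming the file was mounted at /tmp/error_log:

```yaml
receivers:
  filelog:
    include: [ /tmp/error_log ]   # container-side path of the mounted file
    start_at: beginning           # also read lines written before startup

service:
  pipelines:
    logs:
      # append filelog to the receivers already listed here
      receivers: [otlp, filelog]
      processors: [logstransform/internal, batch]
      exporters: [clickhouselogsexporter]
```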
I am not sure about the IP; isn't localhost working?
n
I will add the file path first, and I will try to troubleshoot the localhost/IP-address issue. I rebooted my Mac yesterday evening, and after that, localhost via the IP address stopped working; I am not able to log in to the SigNoz frontend at all. Thanks, Nitya
Hello Nitya, please let me know if I did this correctly.
Also, can you please send the best way to restart the collector? I also restarted my Mac to fix the localhost issue where I can no longer log in to the SigNoz frontend to view the logs; any idea why?
Now I am able to see localhost and the frontend is fine; all is good. I can see the alerts, logs, and dashboards. When you say "restart your otel collector", what is the command for it, or how do I do it, please? Thanks for your help; I am so close to my POC now. Thanks
Is this the command to restart it: systemctl restart otelcol-contrib
Or is this the right command: sudo service otelcol restart. Where do I run this command, please?
I hope someone on your team can really make my POC successful. Every time I make a change in SigNoz on my Mac, my frontend crashes; now I got this. It is an old version. Can someone please delete my login ID (Noor-Ul-Amin.Ali) from your end? I want to recreate my login ID and see if it works. If the POC does not work, they might not consider getting the product. Thanks
n
Please select a slot here and we can get on a call: https://calendly.com/nityananda-1/30min
n
Let me know whether this time works for you today: 7 am this morning. Thanks
n
Let's connect at the agreed time.
n
OK, I did not see a phone # in the meeting invite.
I just saw it, thanks.
Copy code
receivers:
  include: [ /Untiled/Users/nooramin-ali/Documents/Log file/access.log ]
  tcplog/docker:
    listen_address: "0.0.0.0:2255"
    operators:
      - type: regex_parser
        regex: '^<([0-9]+)>[0-9]+ (?P<timestamp>[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}(\.[0-9]+)?([zZ]|([\+-])([01]\d|2[0-3]):?([0-5]\d)?)?) (?P<container_id>\S+) (?P<container_name>\S+) [0-9]+ - -( (?P<body>.*))?'
        timestamp:
          parse_from: attributes.timestamp
          layout: '%Y-%m-%dT%H:%M:%S.%LZ'
      - type: move
        from: attributes["body"]
        to: body
      - type: remove
        field: attributes.timestamp
        # please remove names from below if you want to collect logs from them
      - type: filter
        id: signoz_logs_filter
        expr: 'attributes.container_name matches "^signoz-(logspout|frontend|alertmanager|query-service|otel-collector|otel-collector-metrics|clickhouse|zookeeper)"'
  opencensus:
    endpoint: 0.0.0.0:55678
  otlp/spanmetrics:
    protocols:
      grpc:
        endpoint: localhost:12345
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
  jaeger:
    protocols:
      grpc:
        endpoint: 0.0.0.0:14250
      thrift_http:
        endpoint: 0.0.0.0:14268
      # thrift_compact:
      #   endpoint: 0.0.0.0:6831
      # thrift_binary:
      #   endpoint: 0.0.0.0:6832
  hostmetrics:
    collection_interval: 30s
    scrapers:
      cpu: {}
      load: {}
      memory: {}
      disk: {}
      filesystem: {}
      network: {}
  prometheus:
    config:
      global:
        scrape_interval: 60s
      scrape_configs:
        # otel-collector internal metrics
        - job_name: otel-collector
          static_configs:
          - targets:
              - localhost:8888
            labels:
              job_name: otel-collector
processors:
  logstransform/internal:
    operators:
      - type: trace_parser
        if: '"trace_id" in attributes or "span_id" in attributes'
        trace_id:
          parse_from: attributes.trace_id
        span_id:
          parse_from: attributes.span_id
        output: remove_trace_id
      - type: trace_parser
        if: '"traceId" in attributes or "spanId" in attributes'
        trace_id:
          parse_from: attributes.traceId
        span_id:
          parse_from: attributes.spanId
        output: remove_traceId
      - id: remove_traceId
        type: remove
        if: '"traceId" in attributes'
        field: attributes.traceId
        output: remove_spanId
      - id: remove_spanId
        type: remove
        if: '"spanId" in attributes'
        field: attributes.spanId
      - id: remove_trace_id
        type: remove
        if: '"trace_id" in attributes'
        field: attributes.trace_id
        output: remove_span_id
      - id: remove_span_id
        type: remove
        if: '"span_id" in attributes'
        field: attributes.span_id
  batch:
    send_batch_size: 10000
    send_batch_max_size: 11000
    timeout: 10s
  signozspanmetrics/prometheus:
    metrics_exporter: prometheus
    latency_histogram_buckets: [100us, 1ms, 2ms, 6ms, 10ms, 50ms, 100ms, 250ms, 500ms, 1000ms, 1400ms, 2000ms, 5s, 10s, 20s, 40s, 60s ]
    dimensions_cache_size: 100000
    dimensions:
      - name: service.namespace
        default: default
      - name: deployment.environment
        default: default
      # This is added to ensure the uniqueness of the timeseries
      # Otherwise, identical timeseries produced by multiple replicas of
      # collectors result in incorrect APM metrics
      - name: 'signoz.collector.id'
  # memory_limiter:
  #   # 80% of maximum memory up to 2G
  #   limit_mib: 1500
  #   # 25% of limit up to 2G
  #   spike_limit_mib: 512
  #   check_interval: 5s
  #
  #   # 50% of the maximum memory
  #   limit_percentage: 50
  #   # 20% of max memory usage spike expected
  #   spike_limit_percentage: 20
  # queued_retry:
  #   num_workers: 4
  #   queue_size: 100
  #   retry_on_failure: true
  resourcedetection:
    # Using OTEL_RESOURCE_ATTRIBUTES envvar, env detector adds custom labels.
    detectors: [env, system] # include ec2 for AWS, gcp for GCP and azure for Azure.
    timeout: 2s
extensions:
  health_check:
    endpoint: 0.0.0.0:13133
  zpages:
    endpoint: 0.0.0.0:55679
  pprof:
    endpoint: 0.0.0.0:1777
exporters:
  clickhousetraces:
    datasource: tcp://clickhouse:9000/?database=signoz_traces
    docker_multi_node_cluster: ${DOCKER_MULTI_NODE_CLUSTER}
    low_cardinal_exception_grouping: ${LOW_CARDINAL_EXCEPTION_GROUPING}
  clickhousemetricswrite:
    endpoint: tcp://clickhouse:9000/?database=signoz_metrics
    resource_to_telemetry_conversion:
      enabled: true
  clickhousemetricswrite/prometheus:
    endpoint: tcp://clickhouse:9000/?database=signoz_metrics
  prometheus:
    endpoint: 0.0.0.0:8889
  # logging: {}
  clickhouselogsexporter:
    dsn: tcp://clickhouse:9000/
    docker_multi_node_cluster: ${DOCKER_MULTI_NODE_CLUSTER}
    timeout: 5s
    sending_queue:
      queue_size: 100
    retry_on_failure:
      enabled: true
      initial_interval: 5s
      max_interval: 30s
      max_elapsed_time: 300s
service:
  telemetry:
    metrics:
      address: 0.0.0.0:8888
  extensions:
    - health_check
    - zpages
    - pprof
  pipelines:
    traces:
      receivers: [jaeger, otlp]
      processors: [signozspanmetrics/prometheus, batch]
      exporters: [clickhousetraces]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [clickhousemetricswrite]
    metrics/generic:
      receivers: [hostmetrics]
      processors: [resourcedetection, batch]
      exporters: [clickhousemetricswrite]
    metrics/prometheus:
      receivers: [prometheus]
      processors: [batch]
      exporters: [clickhousemetricswrite/prometheus]
    metrics/spanmetrics:
      receivers: [otlp/spanmetrics]
      exporters: [prometheus]
    logs:
      receivers: [otlp, tcplog/docker]
      processors: [logstransform/internal, batch]
      exporters: [clickhouselogsexporter]
n
Copy code
receivers:
  filelog:
    include: [ /tmp/access.log, /tmp/error.log ]
    start_at: beginning

  tcplog/docker:
    listen_address: "0.0.0.0:2255"
    operators:
      - type: regex_parser
        regex: '^<([0-9]+)>[0-9]+ (?P<timestamp>[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}(\.[0-9]+)?([zZ]|([\+-])([01]\d|2[0-3]):?([0-5]\d)?)?) (?P<container_id>\S+) (?P<container_name>\S+) [0-9]+ - -( (?P<body>.*))?'
        timestamp:
          parse_from: attributes.timestamp
          layout: '%Y-%m-%dT%H:%M:%S.%LZ'
      - type: move
        from: attributes["body"]
        to: body
      - type: remove
        field: attributes.timestamp
        # please remove names from below if you want to collect logs from them
      - type: filter
        id: signoz_logs_filter
        expr: 'attributes.container_name matches "^signoz-(logspout|frontend|alertmanager|query-service|otel-collector|otel-collector-metrics|clickhouse|zookeeper)"'
  opencensus:
    endpoint: 0.0.0.0:55678
  otlp/spanmetrics:
    protocols:
      grpc:
        endpoint: localhost:12345
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
  jaeger:
    protocols:
      grpc:
        endpoint: 0.0.0.0:14250
      thrift_http:
        endpoint: 0.0.0.0:14268
      # thrift_compact:
      #   endpoint: 0.0.0.0:6831
      # thrift_binary:
      #   endpoint: 0.0.0.0:6832
  hostmetrics:
    collection_interval: 30s
    scrapers:
      cpu: {}
      load: {}
      memory: {}
      disk: {}
      filesystem: {}
      network: {}
  prometheus:
    config:
      global:
        scrape_interval: 60s
      scrape_configs:
        # otel-collector internal metrics
        - job_name: otel-collector
          static_configs:
          - targets:
              - localhost:8888
            labels:
              job_name: otel-collector
processors:
  logstransform/internal:
    operators:
      - type: trace_parser
        if: '"trace_id" in attributes or "span_id" in attributes'
        trace_id:
          parse_from: attributes.trace_id
        span_id:
          parse_from: attributes.span_id
        output: remove_trace_id
      - type: trace_parser
        if: '"traceId" in attributes or "spanId" in attributes'
        trace_id:
          parse_from: attributes.traceId
        span_id:
          parse_from: attributes.spanId
        output: remove_traceId
      - id: remove_traceId
        type: remove
        if: '"traceId" in attributes'
        field: attributes.traceId
        output: remove_spanId
      - id: remove_spanId
        type: remove
        if: '"spanId" in attributes'
        field: attributes.spanId
      - id: remove_trace_id
        type: remove
        if: '"trace_id" in attributes'
        field: attributes.trace_id
        output: remove_span_id
      - id: remove_span_id
        type: remove
        if: '"span_id" in attributes'
        field: attributes.span_id
  batch:
    send_batch_size: 10000
    send_batch_max_size: 11000
    timeout: 10s
  signozspanmetrics/prometheus:
    metrics_exporter: prometheus
    latency_histogram_buckets: [100us, 1ms, 2ms, 6ms, 10ms, 50ms, 100ms, 250ms, 500ms, 1000ms, 1400ms, 2000ms, 5s, 10s, 20s, 40s, 60s ]
    dimensions_cache_size: 100000
    dimensions:
      - name: service.namespace
        default: default
      - name: deployment.environment
        default: default
      # This is added to ensure the uniqueness of the timeseries
      # Otherwise, identical timeseries produced by multiple replicas of
      # collectors result in incorrect APM metrics
      - name: 'signoz.collector.id'
  # memory_limiter:
  #   # 80% of maximum memory up to 2G
  #   limit_mib: 1500
  #   # 25% of limit up to 2G
  #   spike_limit_mib: 512
  #   check_interval: 5s
  #
  #   # 50% of the maximum memory
  #   limit_percentage: 50
  #   # 20% of max memory usage spike expected
  #   spike_limit_percentage: 20
  # queued_retry:
  #   num_workers: 4
  #   queue_size: 100
  #   retry_on_failure: true
  resourcedetection:
    # Using OTEL_RESOURCE_ATTRIBUTES envvar, env detector adds custom labels.
    detectors: [env, system] # include ec2 for AWS, gcp for GCP and azure for Azure.
    timeout: 2s
extensions:
  health_check:
    endpoint: 0.0.0.0:13133
  zpages:
    endpoint: 0.0.0.0:55679
  pprof:
    endpoint: 0.0.0.0:1777
exporters:
  clickhousetraces:
    datasource: tcp://clickhouse:9000/?database=signoz_traces
    docker_multi_node_cluster: ${DOCKER_MULTI_NODE_CLUSTER}
    low_cardinal_exception_grouping: ${LOW_CARDINAL_EXCEPTION_GROUPING}
  clickhousemetricswrite:
    endpoint: tcp://clickhouse:9000/?database=signoz_metrics
    resource_to_telemetry_conversion:
      enabled: true
  clickhousemetricswrite/prometheus:
    endpoint: tcp://clickhouse:9000/?database=signoz_metrics
  prometheus:
    endpoint: 0.0.0.0:8889
  # logging: {}
  clickhouselogsexporter:
    dsn: tcp://clickhouse:9000/
    docker_multi_node_cluster: ${DOCKER_MULTI_NODE_CLUSTER}
    timeout: 5s
    sending_queue:
      queue_size: 100
    retry_on_failure:
      enabled: true
      initial_interval: 5s
      max_interval: 30s
      max_elapsed_time: 300s
service:
  telemetry:
    metrics:
      address: 0.0.0.0:8888
  extensions:
    - health_check
    - zpages
    - pprof
  pipelines:
    traces:
      receivers: [jaeger, otlp]
      processors: [signozspanmetrics/prometheus, batch]
      exporters: [clickhousetraces]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [clickhousemetricswrite]
    metrics/generic:
      receivers: [hostmetrics]
      processors: [resourcedetection, batch]
      exporters: [clickhousemetricswrite]
    metrics/prometheus:
      receivers: [prometheus]
      processors: [batch]
      exporters: [clickhousemetricswrite/prometheus]
    metrics/spanmetrics:
      receivers: [otlp/spanmetrics]
      exporters: [prometheus]
    logs:
      receivers: [otlp, filelog]
      processors: [logstransform/internal, batch]
      exporters: [clickhouselogsexporter]
Mount the file:
Copy code
/Untiled/Users/nooramin-ali/Documents/Log file/access.log:/tmp/access.log
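In the compose file, this spec becomes one entry in the otel-collector service's `volumes:` list; note the leading `- ` and the quotes, which matter here because the folder name "Log file" contains a space. A sketch, to be adapted to the actual compose layout:

```yaml
services:
  otel-collector:
    volumes:
      - "/Untiled/Users/nooramin-ali/Documents/Log file/access.log:/tmp/access.log"
```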
n
I am getting ready to mount the files now. Please give me the line # and I will add it there.
Is this correct?
I see this now. How long will it take for the logs to show up in SigNoz?
Let me know if I need to move this to a different location.
Let me know what else I need to do now.
I have not seen any alerts show up yet.
So now I need to do this: sudo systemctl restart otelcol-contrib.service. What is the best command to restart otelcol, and where should it be run from? Thanks
I have seen this error: * error decoding 'volumes[1]': invalid spec: /var/lib/docker/containers/var/lib/docker/containersro /Untiled/Users/nooramin-ali/Documents/Log file/access.log/tmp/access.log too many colons
I need to tweak this a little; if you could explain why I am getting this error now, it would be helpful. I am very close to getting this POC moving. Thanks
I am getting this error when I try to run docker-compose restart.
I need to request one more meeting to make this work right for my POC. I am getting close, but I am new to the SigNoz platform and don't have much time left; I want to get this POC done by Thursday. Thanks
Well, I had to redo it again to get the logs showing up. Now I want some input on why it keeps doing this.
I still don't know why it is not working correctly for me. I need to make sure these location edits are correct, please.
This is what I really want it to pull logs from and show to me. Thanks
Well, I had to delete and reinstall it; it looks good now.
So far so good. I don't know why it has given me so many issues in the last few days on my Mac.
Please connect with me so I can get this going. All I want is to make sure that after adding this part:
Copy code
include: [ /tmp/access.log, /tmp/error.log ]
start_at: beginning
and now this: /Untiled/Users/nooramin-ali/Documents/Log file/access.log:/tmp/access.log. What else do I need to do for the logs to show up on the frontend? It is very important for this POC to be successful. Thanks
n
The mounting path doesn't seem to be correct. Please add a
-
at the beginning, as seen in the other mounts, and also reconfirm the path of the file once again. There might be some issues because there are spaces in your folder names.
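To illustrate why the space matters: an unquoted path containing a space is treated as two separate words by the shell and by compose-style specs. A small sketch with throwaway paths (not the real ones):

```shell
# Make an illustrative directory whose name contains a space
mkdir -p "/tmp/Log file"
printf 'test entry\n' > "/tmp/Log file/access.log"

# Quoted, the path is one argument and resolves correctly
ls "/tmp/Log file/access.log"

# Renaming the folder to drop the space sidesteps quoting issues entirely
mkdir -p /tmp/Logfile
cp "/tmp/Log file/access.log" /tmp/Logfile/access.log
ls /tmp/Logfile/access.log
```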
n
ok will do
-/Untiled/Users/nooramin-ali/Documents/Logfile/access.log:/tmp/access.log. Does this look OK now?
I still don't see the logs showing up on the frontend.
Let me know what else I need to do to make sure these logs show up.
n
It looks okay. Did you restart the signoz-otel-collector container?
n
This is what it looks like now.
I ran docker ps twice.
What other commands can I run?
n
Please format it properly; there should be a space after the -
docker container restart signoz-otel-collector
n
here you go
n
Here you mentioned that you renamed the folder and removed the space, but the screenshot above still has a space: https://signoz-community.slack.com/archives/C01HWQ1R0BC/p1692703731583299?thread_ts=1692222584.184529&cid=C01HWQ1R0BC
n
You mean the space between "Log" and "file"?
I have removed the space now.
I have also run the command now.
Where do I run this command (signoz, deploy, or docker directory): docker container restart signoz-otel-collector
n
anywhere
n
I did.
How long should it take to pull the log file from that path?
Also, what is the best way to find out whether the logs are showing up after the collector gets restarted?
n
Share the logs of signoz-otel-collector
n
What I mean, Nitya, is: how do I find that out here, after I have restarted the collector a few times?
n
@Noor Ali please share a private GitHub repository with me, and I will send you a setup you can run directly.
Also add the log files there.
n
ok
I just added you; let me know if you have access to it.
n
Got it, please upload the log file there.
n
ok
I am getting this error: "Yowza, that's a big file. Try again with a file smaller than 25MB."
My log file size is 741.9 MB.
The other is 217.6 MB.
So will it not work if the file size is too large?
n
Can you get a trimmed-down version of your log file?
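One way to trim is to keep only the last N lines with tail; for a demo, a few thousand lines are usually plenty. A sketch using a generated stand-in log (paths and line counts are arbitrary):

```shell
# Build a stand-in "large" log file for illustration
seq 1 100000 | sed 's/^/2023-08-21 10:15:30 INFO request /' > /tmp/big_access.log

# Keep only the most recent 1000 lines
tail -n 1000 /tmp/big_access.log > /tmp/access_trimmed.log

# Confirm the trimmed size
echo $(wc -l < /tmp/access_trimmed.log) lines
```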
n
OK, I will try.
Done. For POC demo purposes this will do.
Thank you very much for your time and effort; I am learning SigNoz in a very short time.
Please let me know if I need to do anything on my end.
Hello Nitya, my POC is on Thursday. Do you think I will have it done by then? Thanks
Hello Nitya, I hope all is well with you and your team. Please let me know what else I can do on my side for this POC. Thanks
n
Okay, you want the testaccess.rtf logs in SigNoz, right?
n
Yes; for this POC I just want to show how SigNoz will work for log correlation.
I have already created the rest of the docs for SigNoz; now all I want to do is a live demo as a POC with my team and our Sr. Architect.
n
I will send you the setup on github
n
ok
Do you know how long it takes someone to learn SigNoz? What are the common commands for working with alerts, dashboards, traces, and logs? Does your company have docs like that? Just wondering. Thanks
n
Check your repo and follow the commands in the readme.
n
Do you mean here?
n
Yes please refresh
n
I did refresh. My localhost is this: http://192.168.1.229:3301/logs?q=&order=desc
Or do you mean this one: http://localhost:3301/
I just got this error.
Do you know why we get the error?
n
Please follow the instructions in the error message; you will have to make changes to your Docker setup.
n
OK, I will do that, thanks.
Hello Nitya, it never worked for me. I am having a POC/demo of SigNoz with my logs showing up on the frontend side; the POC/demo will be on Friday. Meanwhile, please find out why I was not able to move my logs over. Right now I am not going to touch anything until my demo and POC are done. I will share a few screenshots with you so you can share them with your team members. Thanks
Hello Nitya, my POC and demo are done. I still need a little help to work on the second phase of the project. I made these changes, and now my frontend is working and stable. Tell me what else I need to do, please. Let's schedule a call on Monday; I want to make sure my logs are seen on the frontend: /Untitled /Users/nooramin-ali/Documents/ Log File/Test Log. I would like to know why I am having these issues and why I cannot add logs on my own localhost: http://192.168.1.229:3301/logs?q=&order=desc. Thanks
I have made these changes and would like to test them out.
I would like to make these changes here.
In order for this demo and POC of SigNoz to work, our logs need to be seen on the frontend; I need to troubleshoot to make that happen. Thanks
My next POC and demo moved to next Friday. I need to make sure SigNoz can handle the logs; if not, the POC will not work at all. So please get back to me on this. Thanks
I was checking something; there was a tilde symbol missing in the volume for mounting it.
Please look at this for me. Thanks
Which format is correct for mounting the log file in the volumes section, please?
Please respond to my questions per the earlier screenshot. Thanks
Also, what is the recommended size for the log file? I need to know that as well.
So I made this change here in my GitHub; I ran this command and did not see the error any longer.
But I am not able to see the logs on the frontend yet; I am getting very close now.
Now the only remaining issue is the size of the log files: access_log (741.9 MB) and error_log (217.6 MB) are too large to feed into SigNoz on the same host, like my Mac.
Now I have also reduced the size of my log on my localhost and made a few changes here.
I uploaded new files to GitHub this morning.
Here it is; please let me know whether it is going to work or not. I am testing the logs in different ways to see whether they get pulled. Now all I am waiting for is your response. Thanks
Please respond today so I can move forward.
I am using this to log in to SigNoz to view the logs, just letting you know: http://192.168.1.229:3301/logs?q=&order=desc. Thanks
The other localhost gives me this.
I have uploaded sample files in different formats too; I need to know which one will work better.
I also ran this command and did not find any errors yet. Nitya, if any changes need to be made on the GitHub side, please do so and let me know. All these new versions of the log files are small in size too. Thanks
Nitya, please make the changes on your side and let me know if I need to make changes on my host Mac in order for the logs to be seen. I need to get this POC done by Friday; if it works, great, and if it does not, then you guys might not get the business. Since I figured out the mounting error on my own, now I want to see if the logs can be seen. Thanks
After making a few changes, this error showed up: 'Untitled/Users/Nooramin-ali/Documents/LogFile/TestLog/access_log' mount path must be absolute
Let me know whether this is going to work or not.
Hello Support or Nitya, can you please guide me on how to move forward?
These logs are saved on my local Mac.
So now I am stuck here again, and I don't know why; any help would make this POC successful. Error response from daemon: invalid mount config for type "volume": invalid mount path: 'Untitled/Users/Nooramin-ali/Documents/LogFile/TestLog/access_log' mount path must be absolute. Is this in SigNoz? I need to understand and correct this error, please. Thanks
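The daemon error above means Docker only accepts host paths that start with /. A small shell sketch of the distinction (the paths are illustrative, not the real ones):

```shell
rel='Untitled/Users/me/LogFile/access_log'  # no leading slash: rejected by Docker
abs="$HOME/LogFile/access_log"              # expands to /Users/<name>/...: accepted

# Classify each path by whether it starts with /
for p in "$rel" "$abs"; do
  case "$p" in
    /*) echo "absolute: $p" ;;
    *)  echo "relative: $p" ;;
  esac
done
```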
So it seems to be the mounting path issue; I made the path shorter and it worked. I ran docker compose up, and it ran without errors and is still running.
I got this far on the localhost IP address, but now it says an invitation link needs to be sent.
Please check my login profile on your end. Thanks
I just restarted my Mac laptop, cleared the cache history in Safari, and also cleared my DNS cache, and now I am getting these errors; it is very odd. I will shut down and restart my laptop again. Thanks
I have no earthly idea why this has been happening to me; it is either my Mac laptop or my wifi connection, but now I am able to log back in to SigNoz.
If anyone can assist: why am I not able to see the logs from my Mac laptop, on the same host?
I made sure it was added here also.