TILs - Fueling Curiosity, One Insight at a Time

At Codemancers, we believe every day is an opportunity to grow. This section is where our team shares bite-sized discoveries, technical breakthroughs and fascinating nuggets of wisdom we've stumbled upon in our work.

Nisanth
The docker build -f flag specifies which Dockerfile to use when building an image. By default, Docker looks for a file named Dockerfile at the root of the build context; if your Dockerfile has a different name or lives in a different directory, pass its path with -f.
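For example (the file names Dockerfile.dev and docker/Dockerfile.prod and the myapp tags are hypothetical):

```shell
# By default, docker build looks for a file named "Dockerfile"
# in the build context (the final "." argument).
docker build -t myapp:latest .

# Use -f to point at a differently named or differently located file.
docker build -f Dockerfile.dev -t myapp:dev .
docker build -f docker/Dockerfile.prod -t myapp:prod .
```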
Soniya Rayabagi
Storing Terraform State in S3 with Encryption and DynamoDB Locking, Instead of Storing It Locally:
To improve security, you can configure Terraform to store its state file in an AWS S3 bucket with encryption enabled and use a DynamoDB table for state locking. This setup avoids the risks of a local state file, enables team access, and prevents conflicting concurrent state changes.

Code

Syntax:

terraform {
  backend "<BACKEND_NAME>" {
    [CONFIG...]
  }
}



Code

Example:

terraform {
  backend "s3" {
    bucket         = "my-terraform-state"
    key            = "state/production/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "my-terraform-lock"
  }
}
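The bucket and lock table must exist before terraform init. A sketch using the AWS CLI (resource names match the example above; Terraform requires the lock table's partition key to be a string attribute named LockID):

```shell
# Create the S3 bucket that will hold the state file, and enable
# versioning so older state revisions are recoverable.
aws s3api create-bucket --bucket my-terraform-state --region us-east-1
aws s3api put-bucket-versioning --bucket my-terraform-state \
  --versioning-configuration Status=Enabled

# Create the DynamoDB table used for state locking. The partition key
# must be a string attribute named "LockID".
aws dynamodb create-table \
  --table-name my-terraform-lock \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST
```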

Mahesh Bhosle
DevOps Engineer
The act utility can be used to run GitHub Actions workflows locally, which is useful for testing workflows before pushing.
To install act on macOS: brew install act
To list all jobs, navigate to the GitHub repo and run: act -l
To run a particular job: act -j job_id
To run a job with a specific platform-to-image mapping: act -j job_id -P <platform>=<docker-image>

[To run the jobs locally, docker must be running on the system.]
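A typical session might look like this (the job id build is hypothetical; catthehacker/ubuntu:act-latest is one commonly used runner image with act):

```shell
# List the jobs defined under .github/workflows/
act -l

# Run the "build" job, mapping the ubuntu-latest platform
# to a specific runner image.
act -j build -P ubuntu-latest=catthehacker/ubuntu:act-latest
```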
Soniya Rayabagi
In Dockerfile reference syntax, square brackets [ ] indicate that the entire expression within them is optional.
Example: FROM [--platform=<platform>] <image> [AS <name>]
Here, you can choose whether or not to include --platform=<platform> or AS <name>.
Satya
When ActiveJob's queue adapter is set to :inline, it no longer depends on a scheduler. You can still call perform_later as usual, but the job is performed immediately in the calling process.
Sachin Kabadi
System Analyst
If you face the error " Sprockets::Rails::Helper::AssetNotFound: The asset "tailwind.css" is not present in the asset pipeline " while running a CI/CD pipeline or during deployment, check whether you have added "app/assets/builds/*" to your .gitignore. Remove it and precompile assets again using "rake assets:precompile" or "rails assets:precompile". Make sure the "tailwind.css" file exists in the "app/assets/builds/" folder.
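The fix, sketched as shell commands (the sed invocation assumes the ignore rule appears exactly as app/assets/builds/* and uses macOS sed; on Linux, drop the empty '' argument):

```shell
# Remove the ignore rule so the built assets are committed.
sed -i '' '\#app/assets/builds/\*#d' .gitignore

# Rebuild the assets and confirm the file now exists.
rails assets:precompile
ls app/assets/builds/tailwind.css
```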
Satya
While running psql postgres, if we face the below error:
psql: error: connection to server on socket "/tmp/.s.PGSQL.5432" failed: No such file or directory
it can be due to one of two reasons.
1. The service isn't running; that can be fixed using brew services start postgresql@15
2. The service is running, but fails with an error like

Code

Bootstrap failed: 5: Input/output error
Try re-running the command as root for richer errors.
Error: Failure while executing; `/bin/launchctl bootstrap gui/501 /Users/<user>/Library/LaunchAgents/homebrew.mxcl.postgresql@15.plist` exited with 5.


We can fix the second error by removing the stale process pid file.
• Step 1 -> brew services stop postgresql@15
• Step 2 -> rm -f /opt/homebrew/var/postgresql@15/postmaster.pid (on Apple silicon, Homebrew installs under /opt/homebrew)
• Step 3 -> brew services start postgresql@15
These three steps should fix the error, and we can check the service status by running brew services info postgresql@15
Sachin Kabadi
System Analyst
While testing request specs in Rails, configure the allowed host in your environment (development/test):
Add the line below to your "config/environments/test.rb" file.

Ruby

config.hosts << "www.example.com"

Soniya Rayabagi
CIDR Block (Classless Inter-Domain Routing)
Example: CIDR block 192.168.1.0/28
This CIDR block represents all IP addresses from 192.168.1.0 to 192.168.1.15. The "/28" notation indicates that the first 28 bits of the address are fixed (the network prefix), leaving the remaining 4 bits free to vary, which gives 2^4 = 16 possible addresses in the range.
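Ruby's standard-library IPAddr can confirm the arithmetic:

```ruby
require "ipaddr"

net = IPAddr.new("192.168.1.0/28")
range = net.to_range   # enumerable range of addresses in the block

puts range.first  # 192.168.1.0
puts range.last   # 192.168.1.15
puts range.count  # 16
```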
Sujay
pg_dump commands:
Standard pg_dump without DROP table queries:
pg_dump -h your-hostname -p your-port -U your-username -d your-database-name -f output-file.sql

pg_dump with DROP table queries (clean dump):
pg_dump -h your-hostname -p your-port -U your-username -d your-database-name --clean -f output-file.sql

pg_dump for schema only:
pg_dump -h your-hostname -p your-port -U your-username -d your-database-name --schema-only -f output-file.sql

pg_dump for data only:
pg_dump -h your-hostname -p your-port -U your-username -d your-database-name --data-only -f output-file.sql

pg_dump for data only (no schema, INSERT commands only):
pg_dump -h your-hostname -p your-port -U your-username -d your-database-name --data-only --inserts -f output-file.sql
