• RE: We are looking for a React.js Developer

    bump bump

    posted in Front-End Development
  • RE: The third-party login system is not working on LankaDevelopers.lk.

    Apologies for the late reply. We have now resolved all the login issues with the third-party login system. If you have any further questions or need additional assistance, please let us know.

    posted in Comments & Feedback
  • RE: The third-party login system is not working on LankaDevelopers.lk.

    Oh, thanks for reporting. We'll solve it ASAP.

    posted in Comments & Feedback
  • RE: TALL STACK - Application optimize

    What do you mean by optimize?

    posted in Laravel Framework
  • RE: Mora UXplore 1.0 - University of Moratuwa IEEE Student Branch

    Glad to hear that, thanks for sharing!

    posted in Events ( Free & Paid )
  • Mora UXplore 1.0 - University of Moratuwa IEEE Student Branch


    " Even the greatest was once a beginner. Don't be afraid to take that first step "
    - Muhammad Ali

    Mora UXplore 1.0 is a UI/UX designing competition for undergraduates from any university. It aims to offer a platform for them to explore, learn, and enhance their skills in this field. 🤩✨


    Click the link below and take the first step to be the best 👇

    For more info, join our Facebook event:

    Caption by- D.M.M. Shehan
    Design by- Dilina Raveen

    -Inspired by PASSION to Transform beyond EXCELLENCE-


    posted in Events ( Free & Paid )
  • RE: Join the Festivities! Win a Free T-Shirt by Becoming a Part of Lanka Developers Community this New Year!



    posted in Announcements
  • RE: Lanka Developers Community T-shirts


    You can place the order here.

    posted in Announcements
  • LocalAI: A drop-in replacement for OpenAI



    Self-hosted, community-driven, simple local OpenAI-compatible API written in Go. It can be used as a drop-in replacement for OpenAI, running on CPU with consumer-grade hardware. Supports ggml-compatible models: LLaMA, Alpaca, GPT4All, Vicuna, Koala, GPT4All-J.

    Using LocalAI is straightforward. You can simply install it on your local machine or server via Docker and start running inference immediately. No more talk, let's start:

    1. Install Docker on your PC or server (installation depends on the OS type; check here)

    2. Open a terminal or cmd and clone the LocalAI repo from GitHub

        git clone https://github.com/go-skynet/LocalAI
    3. Go to the LocalAI/models folder in the terminal

        cd LocalAI/models
    4. Download a model (here I use the gpt4all-j model; it comes with an Apache 2.0 license, so it can be used for commercial purposes)

       wget https://gpt4all.io/models/ggml-gpt4all-j.bin

      Here I use wget for the download; you can also download the bin file manually and copy it to the LocalAI/models folder

    5. Go back to the LocalAI root folder

        cd ..

    6. Start with Docker Compose

         docker compose up -d --build

    After the above process finishes, let's call our LocalAI via the terminal or cmd. Here I use curl, but you can use any other tool that can perform HTTP requests (Postman, etc.)

    curl http://localhost:8080/v1/models

    This request shows which models we have added to the models directory.
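
    The same check can be scripted. A minimal Python sketch using only the standard library — it assumes the container from step 6 is listening on localhost:8080 and that the response follows the OpenAI list format (a `data` array of objects with an `id` field):

    ```python
    import json
    import urllib.request

    def model_ids(payload: dict) -> list:
        """Extract model ids from an OpenAI-style model list response."""
        return [entry["id"] for entry in payload.get("data", [])]

    def list_models(base_url: str = "http://localhost:8080") -> list:
        """GET /v1/models from a running LocalAI instance."""
        with urllib.request.urlopen(base_url + "/v1/models") as resp:
            return model_ids(json.load(resp))
    ```

    With the gpt4all-j model from step 4 in place, `list_models()` should include "ggml-gpt4all-j".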

    Let's call the AI with an actual prompt.

    curl http://localhost:8080/v1/completions -H "Content-Type: application/json" -d '{
         "model": "ggml-gpt4all-j",
         "prompt": "Explain AI to me like a five-year-old",
         "temperature": 0.7
       }'
    Windows compatibility

    It should work; however, you need to make sure you give enough resources to the container. See

    You can also run the API in Kubernetes; see an example deployment in kubernetes

    API Support
    LocalAI provides an API for running text generation as a service that follows the OpenAI reference and can be used as a drop-in replacement. Once loaded for the first time, models are kept in memory.

    Example of starting the API with docker:

    docker run -p 8080:8080 -ti --rm quay.io/go-skynet/local-ai:latest --models-path /path/to/models --context-size 700 --threads 4

    Then you'll see:

    ┌───────────────────────────────────────────────────┐
    │                   Fiber v2.42.0                   │
    │           (bound on host and port 8080)           │
    │                                                   │
    │ Handlers ............. 1  Processes ........... 1 │
    │ Prefork ....... Disabled  PID ................. 1 │
    └───────────────────────────────────────────────────┘

    If you want more info about the API, go to the GitHub page.

    posted in Artificial Intelligence
  • RE: Do anyone works with cardcom payment gateway?


    You're welcome.

    Do some reverse engineering on laravel-cardcom. I think you can convert it to a plain PHP package by removing the Laravel facade. If you want, I'll help you.

    posted in Web Development