
Red Hat OpenShift Container Platform Instruction Manual for Windows PowerShell

Introduction to the manual
This manual is made to guide you step by step in setting up an OpenShift cloud environment on your own device. It tells you what needs to be done, when it needs to be done, what you will be doing and why, all in one convenient manual made for Windows users. If you want to try it on Linux or macOS, we have also added the commands necessary to get the CodeReady Containers running on those operating systems. Be warned, however: there are some system requirements that are necessary to run the CodeReady Containers that we will be using. These requirements are specified in the chapter Minimum system requirements.
This manual is written for everyone with an interest in the Red Hat OpenShift Container Platform who has at least a basic understanding of the command line within PowerShell on Windows. Even though most of the manual can also be used on Linux or macOS, we will focus on how to do this within Windows.
If you follow this manual you will be able to do the following items by yourself:
● Installing the CodeReady Containers
● Updating OpenShift
● Configuring a CodeReady Container
● Configuring the DNS
● Accessing the OpenShift cluster
● Deploying the MediaWiki application
What is the OpenShift Container platform?
Red Hat OpenShift is a cloud development Platform as a Service (PaaS). It enables developers to develop and deploy their applications on a cloud infrastructure. It is based on the Kubernetes platform and is widely used by developers and IT operations worldwide. The OpenShift Container Platform makes use of CodeReady Containers. CodeReady Containers are pre-configured containers that can be used for development and testing purposes. There are also CodeReady Workspaces, which provide any member of the development or IT team with a consistent, secure, and zero-configuration development environment.
The OpenShift Container Platform is widely used because it helps the programmers and developers make their application faster because of CodeReady Containers and CodeReady Workspaces and it also allows them to test their application in the same environment. One of the advantages provided by OpenShift is the efficient container orchestration. This allows for faster container provisioning, deploying and management. It does this by streamlining and automating the automation process.
What knowledge is required or recommended to proceed with the installation?
To be able to follow this manual some knowledge is mandatory. Because most of the commands are done within the command line interface, it is necessary to know how it works and how you can browse through files and folders. If you don't have this basic knowledge, or have trouble with the basic command line interface commands in PowerShell, a cheat sheet might offer some help. We recommend the following cheat sheet for Windows:
https://www.sans.org/security-resources/sec560/windows_command_line_sheet_v1.pdf
Another option is to read through the operating system's documentation or introduction guides, though the documentation can be overwhelming due to the sheer number of commands.
Microsoft: https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/windows-commands
macOS
https://www.makeuseof.com/tag/mac-terminal-commands-cheat-sheet/
Linux
https://ubuntu.com/tutorials/command-line-for-beginners#2-a-brief-history-lesson
https://www.guru99.com/linux-commands-cheat-sheet.html
http://cc.iiti.ac.in/docs/linuxcommands.pdf
Aside from the required knowledge, there are also some things that can be helpful to know just to make the use of OpenShift a bit simpler. This consists of some general knowledge of container and PaaS technologies such as Docker and Kubernetes.
Docker https://www.docker.com/
Kubernetes https://kubernetes.io/

System requirements

Minimum System requirements

Red Hat OpenShift CodeReady Containers has the following minimum hardware requirements:
Hardware requirements
CodeReady Containers requires the following system resources:
● 4 virtual CPUs (vCPUs)
● 9 GB of free random-access memory (RAM)
● 35 GB of storage space
● A physical CPU with virtualization support: Hyper-V (Intel) or SVM mode (AMD), enabled in the BIOS
Software requirements
Red Hat OpenShift CodeReady Containers has the following minimum operating system requirements:
Microsoft Windows
On Microsoft Windows, the Red Hat OpenShift CodeReady Containers requires the Windows 10 Pro Fall Creators Update (version 1709) or newer. CodeReady Containers does not work on earlier versions or other editions of Microsoft Windows. Microsoft Windows 10 Home Edition is not supported.
macOS
On macOS, the Red Hat OpenShift CodeReady Containers requires macOS 10.12 Sierra or newer.
Linux
On Linux, the Red Hat OpenShift CodeReady Containers is only supported on Red Hat Enterprise Linux/CentOS 7.5 or newer and on the latest two stable Fedora releases.
When using Red Hat Enterprise Linux, the machine running CodeReady Containers must be registered with the Red Hat Customer Portal.
Ubuntu 18.04 LTS or newer and Debian 10 or newer are not officially supported and may require manual set up of the host machine.

Required additional software packages for Linux

On Linux, CodeReady Containers requires the libvirt and NetworkManager packages to run. Consult the following table to find the command used to install these packages for your Linux distribution:
Table 1.1 Package installation commands by distribution
Linux Distribution: Installation command
Fedora: sudo dnf install NetworkManager
Red Hat Enterprise Linux/CentOS: su -c 'yum install NetworkManager'
Debian/Ubuntu: sudo apt install qemu-kvm libvirt-daemon libvirt-daemon-system network-manager

Installation

Getting started with the installation

To install CodeReady Containers, a few steps must be undertaken. Because an OpenShift account is necessary to use the application, creating one will be the first step. An account can be made on “https://www.openshift.com/”, where you need to press Log in and then select the option “Create one now”.
After making an account, the next step is to download the latest release of CodeReady Containers and the pull secret from “https://cloud.redhat.com/openshift/install/crc/installer-provisioned”. Make sure to download the version corresponding to your platform and/or operating system. After downloading the right version, the contents have to be extracted from the archive to a location in your $PATH. The pull secret should be saved because it is needed later.
The command line interface has to be opened before we can continue with the installation. For Windows we will use PowerShell. All the commands we use during the installation procedure of this guide are done in this command line interface unless stated otherwise. To be able to run the commands, use the command line interface to go to the location in your $PATH where you extracted the CodeReady Containers archive.
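As a minimal PowerShell sketch of the extraction step (the archive name and the C:\crc destination are illustrative assumptions; use the actual file name of the release you downloaded and a folder that is in your $PATH):
PS C:\Users\[username]> Expand-Archive -Path .\Downloads\crc-windows-amd64.zip -DestinationPath C:\crc
PS C:\Users\[username]> cd C:\crc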
If you have an outdated version installed and you wish to update, you can delete the existing CodeReady Containers virtual machine with the $crc delete command. After deleting the virtual machine, you must replace the old crc binary with a newly downloaded binary of the latest release.
C:\Users\[username]\$PATH>crc delete 
When you have done the previous steps, confirm that the correct and up-to-date crc binary is in use by checking it with the $crc version command; this should show the version that is currently installed.
C:\Users\[username]\$PATH>crc version 
To set up the host operating system for the CodeReady Containers virtual machine you have to run the $crc setup command. After running crc setup, crc start will create a minimal OpenShift 4 cluster in the folder where the executable is located.
C:\Users\[username]>crc setup 

Setting up CodeReady Containers

Now we need to set up the new CodeReady Containers release with the $crc setup command. This command performs the operations necessary to run CodeReady Containers and creates the ~/.crc directory if it did not previously exist. In the process you have to supply your pull secret. Once this process is completed, you have to reboot your system. When the system has restarted, you can start the new CodeReady Containers virtual machine with the $crc start command. The $crc start command starts the CodeReady virtual machine and the OpenShift cluster.
You cannot change the configuration of an existing CodeReady Containers virtual machine. So if you have a CodeReady Containers virtual machine and you want to make configuration changes, you need to delete the virtual machine with the $crc delete command, then create a new virtual machine and start it with the configuration changes. Take note that deleting the virtual machine will also delete the data stored in the CodeReady Containers, so to prevent data loss we recommend saving any data you wish to keep. Also keep in mind that it is not necessary to change the default configuration to start OpenShift.
C:\Users\[username]\$PATH>crc setup 
Before starting the machine, keep in mind that it is not possible to change the virtual machine's configuration afterwards. For this tutorial it is not necessary to change the configuration; if you don't want to make any changes, continue by starting the machine with the crc start command.
C:\Users\[username]\$PATH>crc start 
Note: it is possible that you will get a nameserver error later on. If this is the case, start the machine with crc start -n 1.1.1.1 instead.

Configuration

It is not necessary to change the default configuration to continue with this tutorial; this chapter is here for those who wish to do so and know what they are doing. For macOS and Linux, however, it is necessary to change the DNS settings.

Configuring the CodeReady Containers

To start the configuration of the CodeReady Containers, use the command crc config. This command allows you to configure the crc binary and the CodeReady virtual machine. The command requires a subcommand; the available subcommands are:
● get, which displays the value of a configurable property
● set, which sets the value of a configurable property
● unset, which removes a previously set value again
● view, which displays the full configuration in read-only mode
These commands need to operate on named configurable properties. To list all the available properties, you can run the command $crc config --help.
Throughout this manual we will use the $crc config command a few times to change some properties needed for the configuration.
There is also the possibility to use the crc config command to configure the behavior of the checks run by the $crc start and $crc setup commands. By default, the startup checks will stop the process if their conditions are not met. To bypass this, you can set the value of a property that starts with skip-check or warn-check to true, to skip the check or turn it into a warning instead of ending up with an error.
C:\Users\[username]\$PATH>crc config get 
C:\Users\[username]\$PATH>crc config set 
C:\Users\[username]\$PATH>crc config unset 
C:\Users\[username]\$PATH>crc config view 
C:\Users\[username]\$PATH>crc config --help 
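For example, a hedged sketch of reading one property and relaxing one startup check (the exact property names differ between crc releases; skip-check-ram is assumed here for illustration and should be checked against the $crc config --help output):
C:\Users\[username]\$PATH>crc config get memory
C:\Users\[username]\$PATH>crc config set skip-check-ram true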

Configuring the Virtual Machine

You can use the cpus and memory properties to configure the default number of vCPUs and the amount of memory available to the virtual machine.
To increase the number of vCPUs available to the virtual machine, use the $crc config set cpus <number> command. Keep in mind that the default number of vCPUs is 4, and the number you assign must be equal to or greater than the default value.
To increase the memory available to the virtual machine, use the $crc config set memory <size> command. Keep in mind that the default amount of memory is 9216 MiB (mebibytes), and the amount you assign must be equal to or greater than the default value.
C:\Users\[username]\$PATH>crc config set cpus <number>
C:\Users\[username]\$PATH>crc config set memory <size-in-MiB>
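For instance, assigning 6 vCPUs and 12 GiB (12288 MiB) of memory would look like the following; the values are illustrative, and the lowercase property names are the ones reported by $crc config --help:
C:\Users\[username]\$PATH>crc config set cpus 6
C:\Users\[username]\$PATH>crc config set memory 12288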

Configuring the DNS

Windows / General DNS setup

There are two domain names used by the OpenShift cluster that are managed by the CodeReady Containers:
● crc.testing, the domain for the core OpenShift services.
● apps-crc.testing, the domain used for accessing OpenShift applications that are deployed on the cluster.
Configuring the DNS settings in Windows is done by executing the crc setup command. This command automatically adjusts the DNS configuration on the system. When executing crc start, additional checks are run to verify the configuration.
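A quick way to verify the result is to resolve one of the cluster domains with nslookup, which ships with Windows; the name should resolve to the IP address of the CodeReady Containers virtual machine:
C:\Users\[username]\$PATH>nslookup api.crc.testing
If the name does not resolve, re-running crc setup is a reasonable first step.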

macOS DNS setup

macOS expects the following DNS configuration for the CodeReady Containers:
● CodeReady Containers creates a file that instructs macOS to forward all DNS requests for the testing domain to the CodeReady Containers virtual machine. This file is created at /etc/resolver/testing.
● The oc binary requires an entry for api.crc.testing in /etc/hosts pointing at the VM's IP address in order to function properly.

Linux DNS setup

On Linux, CodeReady Containers expects a slightly different DNS configuration. CodeReady Containers expects NetworkManager to manage networking, and NetworkManager uses dnsmasq through a configuration file, namely /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf.
To set it up properly, the dnsmasq instance has to forward requests for the crc.testing and apps-crc.testing domains to 192.168.130.11. In /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf this will look like the following:
● server=/crc.testing/192.168.130.11
● server=/apps-crc.testing/192.168.130.11
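After creating the file, reloading NetworkManager and testing a lookup should confirm the configuration; a sketch, assuming systemd is used and the dig utility (from the dnsutils/bind-utils package) is installed:
sudo systemctl reload NetworkManager
dig api.crc.testing +short
The lookup should return 192.168.130.11.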

Accessing the OpenShift Cluster

Accessing the OpenShift web console

To gain access to the OpenShift cluster running in the CodeReady virtual machine, make sure the virtual machine is running before continuing with this chapter. The OpenShift cluster can be accessed through the OpenShift web console or through the client binary (oc).
First you need to execute the $crc console command; this command will open your web browser and direct a tab to the web console. After that, you need to select the htpasswd_provider option in the OpenShift web console and log in as a developer user with the credentials provided in the output of the crc start command.
It is also possible to view the passwords for the kubeadmin and developer users by running the $crc console --credentials command. While you can access the cluster as either user, note that the kubeadmin user should only be used for administrative tasks such as user management, while the developer user is for creating projects or OpenShift applications and deploying those applications.
C:\Users\[username]\$PATH>crc console 
C:\Users\[username]\$PATH>crc console --credentials 

Accessing the OpenShift cluster with oc

To gain access to the OpenShift cluster with the use of the oc command you need to complete several steps.
Step 1.
Execute the $crc oc-env command to print the command needed to add the cached oc binary to your PATH:
C:\Users\[username]\$PATH>crc oc-env 
Step 2.
Execute the printed command. The output will look something like the following:
PS C:\Users\OpenShift> crc oc-env
$Env:PATH = "C:\Users\OpenShift\.crc\bin\oc;$Env:PATH"
# Run this command to configure your shell:
# & crc oc-env | Invoke-Expression
This means we have to execute the command that the output gives us; in this case that is:
C:\Users\[username]\$PATH>crc oc-env | Invoke-Expression 
Note: this has to be executed every time you start a new shell; a solution is to move the oc binary to the same path as the crc binary.
To test whether this step went correctly, execute the following command; if it returns without errors, oc is set up properly:
C:\Users\[username]\$PATH>.\oc 
Step 3
Now you need to log in as a developer user; this can be done using the following command:
$oc login -u developer https://api.crc.testing:6443
Keep in mind that $crc start provides you with the password that is needed to log in as the developer user.
C:\Users\[username]\$PATH>oc login -u developer https://api.crc.testing:6443 
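If you prefer to supply the password on the command line instead of being prompted, oc login also accepts it via the -p flag; here <password> is a placeholder for the value printed by crc start or crc console --credentials:
C:\Users\[username]\$PATH>oc login -u developer -p <password> https://api.crc.testing:6443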
Step 4
The oc binary can now be used to interact with your OpenShift cluster. If you, for instance, want to verify that the OpenShift cluster Operators are available, you can execute the following command:
$oc get co 
Keep in mind that by default CodeReady Containers disables the machine-config and monitoring cluster Operators.
C:\Users\[username]\$PATH>oc get co 

Demonstration

Now that you are able to access the cluster, we will take you on a tour through some of the possibilities within OpenShift Container Platform.
We will start by creating a project. Within this project we will import an image, and with this image we are going to build an application. After building the application we will explain how upscaling and downscaling can be used within the created application.
As the next step we will show the user how to make changes in the network route. We also show how monitoring can be used within the platform; however, within the current version of CodeReady Containers this has been disabled.
Lastly, we will show the user how to use user management within the platform.

Creating a project

To be able to create a project within the console you have to be logged in on the cluster. If you have not yet done this, it can be done by running the command crc console in the command line and logging in with the login data from before.
When you are logged in as admin, switch to Developer. If you're logged in as a developer, you don't have to switch. Switching can be done with the drop-down menu in the top left.
Now that you are properly logged in, press the drop-down menu shown in the image below, and from there click on Create Project.
https://preview.redd.it/ytax8qocitv51.png?width=658&format=png&auto=webp&s=72d143733f545cf8731a3cca7cafa58c6507ace2
When you press the correct button, the dialog in the following image will pop up. Here you can give your project a name and description. We chose to name it CodeReady with the display name CodeReady Container.
https://preview.redd.it/vtaxadwditv51.png?width=594&format=png&auto=webp&s=e3b004bab39fb3b732d96198ed55fdd99259f210

Importing image

The containers in OpenShift Container Platform are based on OCI- or Docker-formatted images. An image is a binary that contains everything needed to run a container, as well as metadata describing the container's requirements.
Within the OpenShift Container Platform it’s possible to obtain images in a number of ways. There is an integrated Docker registry that offers the possibility to download new images “on the fly”. In addition, OpenShift Container Platform can use third party registries such as:
- https://hub.docker.com/
- https://catalog.redhat.com/software/containers/search
Within this manual we are going to import an image from the Red Hat container catalog. In this example we’ll be using MediaWiki.
Search for the application in https://catalog.redhat.com/software/containers/search

https://preview.redd.it/c4mrbs0fitv51.png?width=672&format=png&auto=webp&s=f708f0542b53a9abf779be2d91d89cf09e9d2895
Navigate to “Get this image”
Follow the steps to “create a registry service account”, after that you can copy the YAML.
https://preview.redd.it/b4rrklqfitv51.png?width=1323&format=png&auto=webp&s=7a2eb14a3a1ba273b166e03e1410f06fd9ee1968
After the YAML has been copied we will go to the topology view and click on the YAML button
https://preview.redd.it/k3qzu8dgitv51.png?width=869&format=png&auto=webp&s=b1fefec67703d0a905b00765f0047fe7c6c0735b
Then we have to paste in the YAML, put in the name, namespace and your pull secret name (which you created through your registry account) and click on create.
https://preview.redd.it/iz48kltgitv51.png?width=781&format=png&auto=webp&s=4effc12e07bd294f64a326928804d9a931e4d2bd
Run the import command within PowerShell:
$oc import-image openshift4/mediawiki --from=registry.redhat.io/openshift4/mediawiki --confirm
imagestream.image.openshift.io/mediawiki imported
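To confirm that the import succeeded, you can list the image stream that was just created; the name mediawiki follows from the import command above:
C:\Users\[username]\$PATH>oc get imagestream mediawiki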

Creating and managing an application

There are a few ways to create and manage applications. Within this demonstration we’ll show how to create an application from the previously imported image.

Creating the application

To create an application with the previously imported image, go back to the console and the topology view. From there, select Container Image.
https://preview.redd.it/6506ea4iitv51.png?width=869&format=png&auto=webp&s=c0231d70bb16c76cd131e6b71256e93550cc8b37
For the image source you'll want to select the “Image stream tag from internal registry” option. Give the application a name and then create the deployment.
https://preview.redd.it/tk72idniitv51.png?width=813&format=png&auto=webp&s=a4e662cf7b96604d84df9d04ab9b90b5436c803c
If everything went right during the creation process you should see the following; this means that the application is running successfully.
https://preview.redd.it/ovv9l85jitv51.png?width=901&format=png&auto=webp&s=f78f350207add0b8a979b6da931ff29ffa30128c

Scaling the application

In OpenShift there is a feature called autoscaling. There are two types of application scaling, namely vertical scaling and horizontal scaling. Vertical scaling means adding more CPU and disk to a single instance, and is no longer supported by OpenShift. Horizontal scaling means increasing the number of instances.
One of the ways to scale an application is by increasing the number of pods. This can be done by going to a pod within the view as seen in the previous step. By pressing the up or down arrow, pods of the same application can be added or removed. This is horizontal scaling and can result in better performance when there are many active users at the same time.
https://preview.redd.it/s6i1vbcrltv51.png?width=602&format=png&auto=webp&s=e62cbeeed116ba8c55704d61a990fc0d8f3cfaa1
In the picture above we see the number of nodes and pods and how many resources those nodes and pods are using. This is something to keep in mind when you scale up your application: the more you scale it up, the more resources it will take.
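The same horizontal scaling can also be done from the command line instead of the console arrows; a sketch, assuming the deployment created earlier is named mediawiki:
C:\Users\[username]\$PATH>oc scale deployment/mediawiki --replicas=3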

https://preview.redd.it/quh037wmitv51.png?width=194&format=png&auto=webp&s=5e326647b223f3918c259b1602afa1b5fbbeea94

Network

Since the OpenShift Container Platform is built on Kubernetes, it might be interesting to know some theory about its networking. Kubernetes ensures that the pods within OpenShift can communicate with each other via the network, and assigns each of them their own IP address. This makes all containers within a pod behave as if they were on the same host. Giving each pod its own IP address means pods can be treated like physical hosts or virtual machines in terms of port mapping, networking, naming, service discovery, load balancing, application configuration and migration. To run multiple services, such as front-end and back-end services, the OpenShift Container Platform has a built-in DNS.
One of the changes that can be made to the networking of a Pod is the Route. We’ll show you how this can be done in this demonstration.
The route is not the only thing that can be changed and/or configured. Two other options that might be interesting, but will not be demonstrated in this manual, are:
- Ingress controller: within OpenShift it is possible to set your own certificate. A user must have a certificate/key pair in PEM-encoded files, with the certificate signed by a trusted authority.
- Network policies: by default, all pods in a project are accessible from other pods and network locations. To isolate one or more pods in a project, it is possible to create NetworkPolicy objects in that project to indicate the allowed incoming connections. Project administrators can create and delete NetworkPolicy objects within their own project.
There is a search function within the Container Platform. We’ll use this to search for the network routes and show how to add a new route.
https://preview.redd.it/8jkyhk8pitv51.png?width=769&format=png&auto=webp&s=9a8762df5bbae3d8a7c92db96b8cb70605a3d6da
You can add items that you use a lot to the navigation.
https://preview.redd.it/t32sownqitv51.png?width=1598&format=png&auto=webp&s=6aab6f17bc9f871c591173493722eeae585a9232
For this example, we will add Routes to navigation.
https://preview.redd.it/pm3j7ljritv51.png?width=291&format=png&auto=webp&s=bc6fbda061afdd0780bbc72555d809b84a130b5b
Now that we’ve added Routes to the navigation, we can start the creation of the Route by clicking on “Create route”.
https://preview.redd.it/5lgecq0titv51.png?width=1603&format=png&auto=webp&s=d548789daaa6a8c7312a419393795b52da0e9f75
Fill in the name, select the service and the target port from the drop-down menu and click on Create.
https://preview.redd.it/qczgjc2uitv51.png?width=778&format=png&auto=webp&s=563f73f0dc548e3b5b2319ca97339e8f7b06c9d6
As you can see, we’ve successfully added the new route to our application.
https://preview.redd.it/gxfanp2vitv51.png?width=1588&format=png&auto=webp&s=1aae813d7ad0025f91013d884fcf62c5e7d109f1
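Routes can also be listed or created from the command line; for example (the service name mediawiki is an assumption based on the application created earlier):
C:\Users\[username]\$PATH>oc get routes
C:\Users\[username]\$PATH>oc expose service mediawiki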
Storage
OpenShift makes use of persistent storage; this type of storage uses persistent volume claims (PVCs). PVCs allow the developer to request persistent volumes (PVs) without needing any knowledge of the underlying infrastructure.
Persistent storage has a few configuration options; an important one is the reclaim policy. It is important to know how to manually reclaim persistent volumes: when the claim on a PV is deleted, the associated data is not automatically deleted with it, so the storage cannot yet be reassigned to another claim.
To manually reclaim the PV, follow these steps:
Step 1: Delete the PV; this can be done by executing the following command (where <pv-name> is the name of the volume):
$oc delete pv <pv-name>
Step 2: Now you need to clean up the data on the associated storage asset
Step 3: Now you can delete the associated storage asset, or, if you wish to reuse the same storage asset, you can create a new PV with the storage asset definition.
It is also possible to directly change the reclaim policy within OpenShift; to do this, follow these steps:
Step 1: Get a list of the PVs in your cluster
$oc get pv 
This will give you a list of all the PVs in your cluster and display the following attributes: Name, Capacity, Access modes, Reclaim policy, Status, Claim, Storage class, Reason and Age.
Step 2: Now choose the PV you wish to change and execute one of the following commands, depending on your preferred policy (where <pv-name> is the name of the chosen PV):
$oc patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}' 
In this example the reclaim policy will be changed to Retain.
$oc patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Recycle"}}' 
In this example the reclaim policy will be changed to Recycle.
$oc patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}' 
In this example the reclaim policy will be changed to Delete.

Step 3: After this you can check the PV to verify the change by executing this command again:
$oc get pv 

Monitoring

Within Red Hat OpenShift there is the possibility to monitor the data that has been created by your containers, applications, and pods. To do so, click on the menu option in the top left corner. Check that you are logged in as Developer and click on “Monitoring”. Normally this function is not activated within the CodeReady Containers, because it uses a lot of resources (RAM and CPU) to run.
https://preview.redd.it/an0wvn6zitv51.png?width=228&format=png&auto=webp&s=51abf8cc31bd763deb457d49514f99ee81d610ec
Once you have activated “Monitoring” you can change the “Time Range” and “Refresh Interval” in the top right corner of your screen. This will change the monitoring data on your screen.
https://preview.redd.it/e0yvzsh1jtv51.png?width=493&format=png&auto=webp&s=b2c563635cfa60ea7ce2f9c146aa994df6aa1c34
Within this function you can also monitor “Events”. These events are records of important information and are useful for monitoring and troubleshooting within the OpenShift Container Platform.
https://preview.redd.it/l90vkmp3jtv51.png?width=602&format=png&auto=webp&s=4e97f14bedaec7ededcdcda96e7823f77ced24c2

User management

According to the OpenShift documentation, a user is an entity that interacts with the OpenShift Container Platform API. This can be a developer developing applications or an administrator managing the cluster. Users can be assigned to groups, which set the permissions applied to all the group's members. For example, you can give API access to a group, which gives all members of the group API access.
There are multiple ways to create a user depending on the configured identity provider. The DenyAll identity provider is the default within OpenShift Container Platform. This default denies access for all the usernames and passwords.
First, we're going to create a new user. The way this is done depends on the identity provider and on the mapping method used as part of the identity provider configuration.
For more information on what mapping methods are and how they function, see:
https://docs.openshift.com/enterprise/3.1/install_config/configuring_authentication.html
With the default mapping method, the steps are as follows. First, create the user (where <username> is the desired user name):
$oc create user <username>
Next up, we’ll create an OpenShift Container Platform Identity. Use the name of the identity provider and the name that uniquely represents this identity in the scope of the identity provider:
$oc create identity <identity-provider>:<identity-provider-user-name>
Here, <identity-provider> is the name of the identity provider in the master configuration. For example, the following command creates an identity with the identity provider ldap_provider and the identity provider user name mediawiki_s.
$oc create identity ldap_provider:mediawiki_s 
Create a user identity mapping for the created user and identity:
$oc create useridentitymapping <identity-provider>:<identity-provider-user-name> <username>
For example, the following command maps the identity to the user:
$oc create useridentitymapping ldap_provider:mediawiki_s mediawiki 
Now we're going to assign a role to this new user; this can be done by executing the following command:
$oc create clusterrolebinding <binding-name> --clusterrole=<role> --user=<username>
The --clusterrole option can be used to give the user a specific role, such as a cluster user with admin privileges. The cluster admin has access to all files and is able to manage the access level of other users.
Below is an example of the admin clusterrole command:
$oc create clusterrolebinding registry-controller \ --clusterrole=cluster-admin --user=admin 

What did you achieve?

If you followed all the steps within this manual, you should now have a functioning MediaWiki application running on your own CodeReady Containers. During the installation of this application on CodeReady Containers you have learned how to do the following things:
● Installing the CodeReady Containers
● Updating OpenShift
● Configuring a CodeReady Container
● Configuring the DNS
● Accessing the OpenShift cluster
● Deploying an application
● Creating new users
With these skills you’ll be able to set up your own Container Platform environment and host applications of your choosing.

Troubleshooting

Nameserver
There is the possibility that your CodeReady Containers virtual machine can't connect to the internet due to a nameserver error. When this is encountered, a working fix for us was to stop the machine and then start it with the following command:
C:\Users\[username]\$PATH>crc start -n 1.1.1.1 
Hyper-V admin
Should you run into a problem with Hyper-V, it might be because your user account is not an admin and therefore isn't in the Hyper-V Administrators user group.
  1. Click Start > Control Panel > Administrative Tools > Computer Management. The Computer Management window opens.
  2. Click System Tools > Local Users and Groups > Groups. The list of groups opens.
  3. Double-click the Hyper-V Administrators group. The Hyper-V Administrators Properties window opens.
  4. Click Add. The Select Users or Groups window opens.
  5. In the Enter the object names to select field, enter the user account name to whom you want to assign permissions, and then click OK.
  6. Click Apply, and then click OK.

Terms and definitions

These terms and definitions will be expanded upon; below you can see an example of how this will look, together with a few terms that will require definitions.
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. OpenShift is based on Kubernetes.
Clusters are a collection of multiple nodes which communicate with each other to perform a set of operations.
Containers are the basic units of OpenShift applications. These container technologies are lightweight mechanisms for isolating running processes so that they are limited to interacting with only their designated resources.
CodeReady Containers is a minimal, preconfigured cluster that is used for development and testing purposes.
CodeReady Workspaces uses Kubernetes and containers to provide any member of the development or IT team with a consistent, secure, and zero-configuration development environment.

Sources

  1. https://www.ibm.com/support/knowledgecenter/en/SSMKFH/com.ibm.apmaas.doc/install/hyperv_config_add_nonadmin_user_hyperv_usergroup.html
  2. https://access.redhat.com/documentation/en-us/openshift_container_platform/4.5/
  3. https://docs.openshift.com/container-platform/3.11/admin_guide/manage_users.html
submitted by Groep6HHS to r/openshift

./play.it 2.12: API, GUI and video games


./play.it is a free/libre software that builds native packages for several Linux distributions from DRM-free installers for a collection of commercial games. These packages can then be installed using the standard distribution-provided tools (APT, pacman, emerge, etc.).
A more complete description of ./play.it has already been posted in r/linux_gaming a couple months ago: ./play.it, an easy way to install commercial games on GNU/Linux
It has already been one year since version 2.11 was released, in January 2019. We will only briefly review the changelog of version 2.12, and focus instead on the different parts of ./play.it that kept us busy during all this time, of which coding was only a small part.

What’s new with 2.12?

Though not the focus of this article, it would be a pity not to present all the added features of this brand new version. ;)
Compared to the usual updates, 2.12 is a major one, especially since, for two years, we had slowed down the addition of new features. Some patches had been gathering dust since the end of 2018 before finally being integrated in this update!
The list of changes for this 2.12 release can be found on our forge. Here is a full copy for convenience:

Development migration

History

As with many free/libre projects, ./play.it development started on some random sector of a creaking hard drive, and unsurprisingly, a whole part of its history (everything predating version 1.13.15, released on March 30th, 2016) disappeared into limbo because some unwise operation destroyed the only copy of the repository… Lesson learned: what's not shared doesn't last long, and so the first public Git repository of the project was born. The easing of collaborative work was only accidentally achieved by this quest for durability, and wasn't the original motivation for making the repository publicly available.
Following this decision, ./play.it source code has been hosted successively by many shared forge platforms:

Dedicated forge

As development progressed, ./play.it began to need more resources, dividing its code into several repositories to improve the workflow of the different aspects of the project, adding continuous integration tests and their constraints, etc. A furious desire to understand the nooks and crannies behind a forge platform was the last deciding factor towards hosting a dedicated forge.
So it happened: we deployed a forge platform on a dedicated server, hugely benefiting from the tremendous work achieved by the Debian maintainers of the GitLab package. In return, we tried to contribute our findings to improve the packaging of this software.
That was not expected, but this migration happened just a short time before the announcement “Déframasoftisons Internet !” (French article) about the planned end of Framagit.
This dedicated instance used to be hosted on a VPS rented from Digital Ocean until the second half of July 2020, and has since been moved to another VPS, rented from Hetzner. The specifications are similar, as is the service, but thanks to this migration our hosting costs have been cut in half. Keep in mind that this is paid for by a single person, so any little donation helps a lot on this front. ;)
To the surprise of our system administrator, this last migration took only a couple hours with no service interruption reported by our users.

Forge access

This new forge can be found at forge.dotslashplay.it. Registrations are open to the public, but we ask you to not abuse this, the main restriction being that we do not wish to host projects unrelated to ./play.it. Of course exceptions are made for our active contributors, who are allowed to host some personal projects there.
So, if you wish to use this forge to host your own work, you first need to make some significant contributions to ./play.it.

API

With the collection of supported games growing endlessly, we have started the development of a public API allowing access to lots of information related to ./play.it.
This API, which is not yet stabilized, is simply an interface to a versioned database containing all the ./play.it scripts, the archives they handle, and the games installable through the project. Relations are, of course, maintained between those items, enabling its use for requests like: “What packages are required on my system to install Cæsar Ⅲ?” or “What are the free (as in beer) games handled via DOSBox?”.
Originally developed as support for the new, in-development website (we'll talk about it later on), this API should facilitate the development of tools around ./play.it. For example, it'll be useful for whoever would like to build a complete video game handling application (downloading, installation, starting, etc.) using ./play.it as one of its building bricks.
For those curious about the technical side, it's an API based on Lumen that makes requests on a MariaDB database, all self-hosted on Debian Sid. Not only is the code of the API versioned on our forge, but so are the structure and content of the databases, which will allow those who desire it to easily install a local version.

New website

Based on the aforementioned API, a new website is under development and will replace our current website based on DokuWiki.
Indeed, while the lack of a database and the plain-text file structure of DokuWiki seemed attractive at first, when ./play.it supported only a handful of games (link in French), this became more inconvenient as the library of supported games grew.
We shall make an in-depth presentation of this website for the 2.13 release of ./play.it, but a public demo of the development version from our forge is already available.
If you feel like lending a helping hand on this task, some priority tasks have been identified that would allow the new website to replace the current one. For those interested in technical details, this website is developed in PHP using the Laravel framework. The current in-development version is hosted, for now, on the same Debian Sid machine as the API.

GUI

A regular comment made about the project is that, if the purpose is to make installing games accessible to everyone without technical skills, having to run scripts in the terminal remains somewhat intimidating. Our answer until now has been that the project itself doesn't aim to provide a graphical interface (KISS principle: “keep it simple, stupid”, still and always), but that it would be relatively easy to later develop a graphical front-end for it.
Well, it happens that this is now a reality. Around the time of our latest publication, one of our contributors, using the API we just talked about, developed a small prototype that is usable enough to warrant a little shout-out. :-)
In practice, it is some small Python 3 code (a GUI written entirely in POSIX shell is for a later date :-°), using GTK 3 (and still a VTE terminal to display the commands issued, but the user shouldn't have to input anything in it, except perhaps the root password to install some packages). This allowed us to verify that, as we used to say, it would be relatively easy, since a script of less than 500 lines of code (written quickly over a weekend) was enough to do the job!
Of course, this graphical interface project stays independent from the main project, and is maintained in a specific repository. It seems interesting to us to promote it in order to ease the use of ./play.it, but this doesn't prevent other similar projects from being born, for example using a different language or graphical toolkit (we, globally, don't have any particular affinity towards Python or GTK).
Using this GUI involves three steps. First, a list of available games is displayed, coming directly from our API; you just need to select from the list (optionally using the search bar) the game you want to install. It then switches to a second screen, which lists the required files. If several alternatives are available, the user can select the one they want to use. All those files must be in the same directory; the address bar at the top lets you select which one to use (clicking the open button at the top opens a filesystem navigation window). Once all those files are available (if they can be downloaded, the software will do it automatically), you can move on to the third step, which is just watching ./play.it do its job :-) Once done, a simple click on the button at the bottom will run the game (although, from this step on, the game is fully integrated into your system as usual, so you no longer need this tool to run it).
To download potentially missing files, the GUI will use, depending on what's available on the system, either wget, curl or aria2c (this last one also handling torrents); their output will be displayed in the terminal during the third step, just before running the scripts. For privilege escalation to install packages, sudo will be used preferentially if available (with the option to use a third-party application for password input if the corresponding environment variable is set, which is more user-friendly); otherwise su will be used.
Of course, any suggestion for an improvement will be received with pleasure.

New games

Of course, such an announcement would not be complete without a list of the games that got added to our collection since the 2.11 release… So here you go:
If your favourite game is not yet supported by ./play.it, you should ask for it in the dedicated tracker on our forge. The only requirement for a valid request is that there exists a version of the game that is not burdened by DRM.

What’s next?

Our team being inexhaustible, work on the future 2.13 version has already begun…
A few major objectives of this next version are :
If your desired features aren't on this list, don't hesitate to let us know in the comments of this news release. ;)

Links

submitted by vv224 to r/linux_gaming

Gridcoin 5.0.0.0-Mandatory "Fern" Release

https://github.com/gridcoin-community/Gridcoin-Research/releases/tag/5.0.0.0
Finally! After over ten months of development and testing, "Fern" has arrived! This is a whopper. 240 pull requests merged. Essentially a complete rewrite that was started with the scraper (the "neural net" rewrite) in "Denise" has now been completed. Practically the ENTIRE Gridcoin-specific codebase resting on top of the vanilla Bitcoin/Peercoin/Blackcoin PoS code has been rewritten. This removes the team requirement at last (see below), although there are many other important improvements besides that.
Fern was a monumental undertaking. We had to encode all of the old rules active for the v10 block protocol in new code and ensure that the new code was 100% compatible. This had to be done in such a way as to clear out all of the old spaghetti and ring-fence it with tightly controlled class implementations. We then wrote an entirely new, simplified ruleset for research rewards and reengineered contracts (which includes beacon management, polls, and voting) using properly classed code. The fundamentals of Gridcoin with this release are now on a very sound and maintainable footing, and the developers believe the codebase as updated here will serve as the fundamental basis for Gridcoin's future roadmap.
We have been testing this for MONTHS on testnet in various stages. The v10 (legacy) compatibility code has been running on testnet continuously as it was developed to ensure compatibility with existing nodes. During the last few months, we have done two private testnet forks and then the full public testnet testing for v11 code (the new protocol which is what Fern implements). The developers have also been running non-staking "sentinel" nodes on mainnet with this code to verify that the consensus rules are problem-free for the legacy compatibility code on the broader mainnet. We believe this amount of testing is going to result in a smooth rollout.
Given the amount of changes in Fern, I am presenting TWO changelogs below. One is high level, which summarizes the most significant changes in the protocol. The second changelog is the detailed one in the usual format, and gives you an inkling of the size of this release.

Highlights

Protocol

Note that the protocol changes will not become active until we cross the hard-fork transition height to v11, which has been set at 2053000. Given current average block spacing, this should happen around October 4, about one month from now.
Note that to get all of the beacons in the network on the new protocol, we are requiring ALL beacons to be validated. A two week (14 day) grace period is provided by the code, starting at the time of the transition height, for people currently holding a beacon to validate the beacon and prevent it from expiring. That means that EVERY CRUNCHER must advertise and validate their beacon AFTER the v11 transition (around Oct 4th) and BEFORE October 18th (or more precisely, 14 days from the actual date of the v11 transition). If you do not advertise and validate your beacon by this time, your beacon will expire and you will stop earning research rewards until you advertise and validate a new beacon. This process has been made much easier by a brand new beacon "wizard" that helps manage beacon advertisements and renewals.
Once a beacon has been validated and is a v11 protocol beacon, the normal 180 day expiration rules apply. Note, however, that the 180 day expiration on research rewards has been removed with the Fern update. This means that while your beacon might expire after 180 days, your earned research rewards will be retained and can be claimed by advertising a beacon with the same CPID and going through the validation process again. In other words, you no longer lose earned research rewards if you fail to stake a block within 180 days or fail to keep your beacon up-to-date.
The transition height is also when the team requirement will be relaxed for the network.

GUI

Besides the beacon wizard, there are a number of improvements to the GUI, including new UI transaction types (and icons) for staking the superblock, sidestake sends, beacon advertisement, voting, poll creation, and transactions with a message. The main screen has been revamped with a better summary section, and better status icons. Several changes under the hood have improved GUI performance. And finally, the diagnostics have been revamped.

Blockchain

The wallet sync speed has been DRASTICALLY improved. A decent machine with a good network connection should be able to sync the entire mainnet blockchain in less than 4 hours. A fast machine with a really fast network connection and a good SSD can do it in about 2.5 hours. One of our goals was to reduce or eliminate the reliance on snapshots for mainnet, and I think we have accomplished that goal with the new sync speed. We have also streamlined the in-memory structures for the blockchain which shaves some memory use.
There are so many goodies here it is hard to summarize them all.
I would like to thank all of the contributors to this release, but especially thank @cyrossignol, whose incredible contributions formed the backbone of this release. I would also like to pay special thanks to @barton2526, @caraka, and @Quezacoatl1, who tirelessly helped during the testing and polishing phase on testnet with testing and repeated builds for all architectures.
The developers are proud to present this release to the community and we believe this represents the starting point for a true renaissance for Gridcoin!

Summary Changelog

Accrual

Changed

Most significantly, nodes calculate research rewards directly from the magnitudes in EACH superblock between stakes, instead of using a two- or three-point average based on a CPID's current magnitude and the magnitude for the CPID when it last staked. For the long-timers in the community, this has been referred to as "Superblock Windows," and was first done in proof-of-concept form by @denravonska.

Removed

Beacons

Added

Changed

Removed

Unaltered

As a reminder:

Superblocks

Added

Changed

Removed

Voting

Added

Changed

Removed

Detailed Changelog

[5.0.0.0] 2020-09-03, mandatory, "Fern"

Added

Changed

Removed

Fixed

submitted by jamescowens to r/gridcoin

CLI & GUI v0.16.0.3 'Nitrogen Nebula' released!

This is the CLI & GUI v0.16.0.3 'Nitrogen Nebula' point release. This release predominantly features bug fixes and performance improvements.

(Direct) download links (GUI)

(Direct) download links (CLI)

GPG signed hashes

We encourage users to check the integrity of the binaries and verify that they were signed by binaryFate's GPG key. A guide that walks you through this process can be found here for Windows and here for Linux and Mac OS X.
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

# This GPG-signed message exists to confirm the SHA256 sums of Monero binaries.
#
# Please verify the signature against the key for binaryFate in the
# source code repository (/utils/gpg_keys).
#
#
## CLI
75b198869a3a117b13b9a77b700afe5cee54fd86244e56cb59151d545adbbdfd monero-android-armv7-v0.16.0.3.tar.bz2
b48918a167b0961cdca524fad5117247239d7e21a047dac4fc863253510ccea1 monero-android-armv8-v0.16.0.3.tar.bz2
727a1b23fbf517bf2f1878f582b3f5ae5c35681fcd37bb2560f2e8ea204196f3 monero-freebsd-x64-v0.16.0.3.tar.bz2
6df98716bb251257c3aab3cf1ab2a0e5b958ecf25dcf2e058498783a20a84988 monero-linux-armv7-v0.16.0.3.tar.bz2
6849446764e2a8528d172246c6b385495ac60fffc8d73b44b05b796d5724a926 monero-linux-armv8-v0.16.0.3.tar.bz2
cb67ad0bec9a342b0f0be3f1fdb4a2c8d57a914be25fc62ad432494779448cc3 monero-linux-x64-v0.16.0.3.tar.bz2
49aa85bb59336db2de357800bc796e9b7d94224d9c3ebbcd205a8eb2f49c3f79 monero-linux-x86-v0.16.0.3.tar.bz2
16a5b7d8dcdaff7d760c14e8563dd9220b2e0499c6d0d88b3e6493601f24660d monero-mac-x64-v0.16.0.3.tar.bz2
5d52712827d29440d53d521852c6af179872c5719d05fa8551503d124dec1f48 monero-win-x64-v0.16.0.3.zip
ff094c5191b0253a557be5d6683fd99e1146bf4bcb99dc8824bd9a64f9293104 monero-win-x86-v0.16.0.3.zip
#
## GUI
50fe1d2dae31deb1ee542a5c2165fc6d6c04b9a13bcafde8a75f23f23671d484 monero-gui-install-win-x64-v0.16.0.3.exe
20c03ddb1c82e1bcb73339ef22f409e5850a54042005c6e97e42400f56ab2505 monero-gui-linux-x64-v0.16.0.3.tar.bz2
574a84148ee6af7119fda6b9e2859e8e9028fe8a8eec4dfdd196aeade47e9c90 monero-gui-mac-x64-v0.16.0.3.dmg
371cb4de2c9ccb5ed99b2622068b6aeea5bdfc7b9805340ea7eb92e7c17f2478 monero-gui-win-x64-v0.16.0.3.zip
#
#
# ~binaryFate
-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEgaxZH+nEtlxYBq/D8K9NRioL35IFAl81bL8ACgkQ8K9NRioL
35J+UA//bgY6Mhikh8Cji8i2bmGXEmGvvWMAHJiAtAG2lgW3BT9BHAFMfEpUP5rk
svFNsUY/Uurtzxwc/myTPWLzvXVMHzaWJ/EMKV9/C3xrDzQxRnl/+HRS38aT/D+N
gaDjchCfk05NHRIOWkO3+2Erpn3gYZ/VVacMo3KnXnQuMXvAkmT5vB7/3BoosOU+
B1Jg5vPZFCXyZmPiMQ/852Gxl5FWi0+zDptW0jrywaS471L8/ZnIzwfdLKgMO49p
Fek1WUUy9emnnv66oITYOclOKoC8IjeL4E1UHSdTnmysYK0If0thq5w7wIkElDaV
avtDlwqp+vtiwm2svXZ08rqakmvPw+uqlYKDSlH5lY9g0STl8v4F3/aIvvKs0bLr
My2F6q9QeUnCZWgtkUKsBy3WhqJsJ7hhyYd+y+sBFIQH3UVNv5k8XqMIXKsrVgmn
lRSolLmb1pivCEohIRXl4SgY9yzRnJT1OYHwgsNmEC5T9f019QjVPsDlGNwjqgqB
S+Theb+pQzjOhqBziBkRUJqJbQTezHoMIq0xTn9j4VsvRObYNtkuuBQJv1wPRW72
SPJ53BLS3WkeKycbJw3TO9r4BQDPoKetYTE6JctRaG3pSG9VC4pcs2vrXRWmLhVX
QUb0V9Kwl9unD5lnN17dXbaU3x9Dc2pF62ZAExgNYfuCV/pTJmc=
=bbBm
-----END PGP SIGNATURE-----
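As a short sketch of the verification itself (hashes.txt is an assumed name for the signed message above saved to a file; sha256sum and gpg must be installed, and binaryFate's key imported as described in the linked guides):
sha256sum monero-gui-linux-x64-v0.16.0.3.tar.bz2
gpg --verify hashes.txt
On Windows, certutil -hashfile monero-gui-win-x64-v0.16.0.3.zip SHA256 produces the same kind of SHA256 sum for comparison.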

Upgrading (GUI)

Note that you should be able to utilize the automatic updater in the GUI that was recently added. A pop-up will appear with the new binary.
In case you want to update manually, you ought to perform the following steps:
  1. Download the new binaries (the .zip file (Windows) or the tar.bz2 file (Mac OS X and Linux)) from the direct download links in this thread or from the official website. If you run active AV (antivirus) software, I'd recommend applying this guide -> https://monero.stackexchange.com/questions/10798/my-antivirus-av-software-blocks-quarantines-the-monero-gui-wallet-is-there
  2. Extract the new binaries (the .zip file (Windows) or the tar.bz2 file (Mac OS X and Linux) you just downloaded) to a new directory / folder of your liking.
  3. Open monero-wallet-gui. It should automatically load your "old" wallet.
If, for some reason, the GUI doesn't automatically load your old wallet, you can open it as follows:
[1] On the second page of the wizard (first page is language selection) choose Open a wallet from file
[2] Now select your initial / original wallet. Note that, by default, the wallet files are located in Documents\Monero\ (Windows), Users/<username>/Monero/ (Mac OS X), or /home/<username>/Monero/ (Linux).
Lastly, note that a blockchain resync is not needed, i.e., it will simply pick up where it left off.

Upgrading (CLI)

You ought to perform the following steps:
  1. Download the new binaries (the .zip file (Windows) or the tar.bz2 file (Mac OS X and Linux)) from the official website, the direct download links in this thread, or Github.
  2. Extract the new binaries to a new directory of your liking.
  3. Copy over the wallet files from the old directory (i.e. the v0.15.x.x or v0.16.0.x directory).
  4. Start monerod and monero-wallet-cli (in case you have to use your wallet).
Note that a blockchain resync is not needed. Thus, if you open monerod-v0.16.0.3, it will simply pick up where it left off.

Release notes (GUI)

  • macOS app is now notarized by Apple
  • CMake improvements
  • Add support for IPv6 remote nodes
  • Add command history to Logs page
  • Add "Donate to Monero" button
  • Indicate probability of finding a block on Mining page
  • Minor bug fixes
Note that you can find a full change log here.

Release notes (CLI)

  • DoS fixes
  • Add option to print daily coin emission and fees in monero-blockchain-stats
  • Minor bug fixes
Note that you can find a full change log here.

Further remarks

  • A guide on pruning can be found here.
  • Ledger Monero users, please be aware that version 1.6.0 of the Ledger Monero App is required in order to properly use CLI or GUI v0.16.

Guides on how to get started (GUI)

https://github.com/monero-ecosystem/monero-GUI-guide/blob/master/monero-GUI-guide.md
Older guides: (These were written for older versions, but are still somewhat applicable)
Sheep’s Noob guide to Monero GUI in Tails
https://medium.com/@Electricsheep56/the-monero-gui-wallet-broken-down-in-plain-english-bd2889b8c202

Ledger GUI guides:

How do I generate a Ledger Monero wallet with the GUI (monero-wallet-gui)?
How do I restore / recreate my Ledger Monero wallet?

Trezor GUI guides:

How do I generate a Trezor Monero wallet with the GUI (monero-wallet-gui)?
How to use Monero with Trezor - by Trezor
How do I restore / recreate my Trezor Monero wallet?

Ledger & Trezor CLI guides

Guides to resolve common issues (GUI)

My antivirus (AV) software blocks / quarantines the Monero GUI wallet, is there a work around I can utilize?
I am missing (not seeing) a transaction to (in) the GUI (zero balance)
Transaction stuck as “pending” in the GUI
How do I move the blockchain (data.mdb) to a different directory during (or after) the initial sync without losing the progress?
I am using the GUI and my daemon doesn't start anymore
My GUI feels buggy / freezes all the time
The GUI uses all my bandwidth and I can't browse anymore or use another application that requires internet connection
How do I change the language of the 25 word mnemonic seed in the GUI or CLI?
I am using remote node, but the GUI still syncs blockchain?

Using the GUI with a remote node

In the wizard, you can either select Simple mode or Simple mode (bootstrap) to utilize this functionality. Note that the GUI developers / contributors recommend using Simple mode (bootstrap), as this mode will eventually use your own (local) node, thereby contributing to the strength and decentralization of the network. Lastly, if you want to set a remote node manually, you ought to use Advanced mode. A guide can be found here:
https://www.getmonero.org/resources/user-guides/remote_node_gui.html

Adding a new language to the GUI

https://github.com/monero-ecosystem/monero-translations/blob/master/weblate.md
If, after reading all these guides, you still require help, please post your issue in this thread and describe it in as much detail as possible. Also, feel free to post any other guides that could help people.
submitted by dEBRUYNE_1 to Monero

Adding cover artwork to CDI disc images for GDEMU/GDMENU

A question came up from u/pvcHook in a recent post about adding artwork to GDI images: can the same be done for games in a CDI format? The answer is yes, and the general process is the same as it is for the GDI games. I've already added all of the appropriate artwork to all of the indie shmup games and all that; can I share those here, or is that a no-no? Because if that's all you're here for, that would be a lot easier than putting yourself through this process. But it's something to learn, so read on.
First, if you want to do this, you're going to need the proper tools. Someone put together a CDI toolkit (password: DCSTUFF) of sorts on another forum; this is basically the same thing with a few additions and tweaks I've made. Before you begin, install ISO Buster from the 'isobuster' folder. You will also need the PVR Viewer utility to create the artwork files for the discs. The images you generate will need to be mounted to a virtual drive, so Daemon Tools or some other drive emulation software will also be required. And finally you'll need a copy of DiscJuggler to write your images into a format useable by an emulator or your GDEMU.
EXTRACTION
Here are the general extraction steps, I'll go into a bit more detail after the list:
  1. Copy your CDI image to the 'cdirip' folder in the toolkit and run the 'CDIrip pause.bat' file. Choose an output directory (preferably the 'isofix' folder) and let it rip. You will need to note the LBA info of the tracks being extracted (which is why I made this pause batch file). If only two tracks are extracted, then look closely at the sizes of the sectors that were extracted. If the first track is the larger of the two, then you will not need to use isofix to extract the contents. If the second track is the larger of the two, make note of its LBA value to use with isofix to extract its contents.
  2. Make sure you have installed ISO Buster, you will need it beyond this point.
  3. Go to the 'isofix' folder and you will see the contents of the disc. There will be image files named with the 'TData#.iso' convention and those are what we need to use. The steps diverge a bit from this point depending upon the format of the disc you just extracted; read carefully and follow the instructions for your situation.
  4. If the first track extracted in step one was the larger of the two tracks, open it in ISO Buster and go to step #7.
  5. If the second track extracted in step one was the larger of the two tracks, open a command prompt in 'isofix' (shift+right click) and type "isofix.exe TData#.iso" and give the utility the LBA you noted in step 1 when prompted for it. This will dump a new iso file into the folder called 'fixed.iso'. Open 'fixed.iso' in ISO Buster and go to step #7.
  6. If CDIrip extracted a bunch of wave files and a 'TData#.iso' file, the disc you extracted uses CDDA. Open a command prompt in 'isofix' (shift+right click) and type "isofix.exe TData#.iso" and give the utility the LBA you noted in step 1 when prompted for it. This will dump a new iso file into the folder called 'fixed.iso'. Open 'fixed.iso' in ISO Buster and go to step #7.
  7. In the left pane of ISO Buster you'll see the file structure of the iso file you opened; expand the tree until you see a red 'iso' icon and click on it. This should open up the files and folders within it in the right pane. Highlight all of these files, right click and choose 'Extract Objects'; choose the 'discroot' folder in the CDI toolkit.
Your CDI image is now extracted. Please note that all of the indie releases from NGDEV.TEAM, Hucast.Net, and Duranik use the CDDA format. You'll see the difference when it's time to rebuild the disc image. Also, if you're using PowerShell and not Command Prompt, the syntax for running the command line utilities is a bit different; you would need to type out '.\isofix' (minus quotes) to execute isofix, for example.
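To make that concrete, here is roughly what the isofix step looks like in PowerShell. The file name and LBA value below are hypothetical examples; use the file and LBA from your own rip:
# Run from inside the 'isofix' folder (shift+right click -> open PowerShell window here)
.\isofix.exe .\TData2.iso
# When the utility prompts for the LBA, enter the value you noted during the CDIrip step (e.g. 11700)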
COVER ART CREATION
There are other guides out there concerned with converting cover art files into the PVR format that the Dreamcast and GDEMU/GDMenu use, so I won't go into great detail about that here. I will note, however, that I generally load games up in Redream at least once so it fetches the cover art for the games. They are very good quality sources, and they're 512x512 so won't lose any quality when you reduce them to 256x256 for the GDMenu.
I will say, however, that a lot of the process in the guide I linked to is optional; you can simply open the source file in PVR Viewer and save it as a .pvr file and it will be fine. But feel free to get as detailed as you like with it.
REBUILDING
Once you have your cover art to your liking, make sure it's been placed in the 'discroot' folder and you can begin the image rebuilding process.
We'll start with an image that doesn't use CDDA:
  1. Check the 'discroot' folder for two files: 1ST_READ.BIN and IP.BIN. Select them, then copy and paste them into the 'binhack32' folder in the toolkit. Run the binhack32.exe application in the 'binhack32' folder (you may have to tweak your antivirus settings to do this).
  2. Binhack32 will prompt you to "enter name of binary": this is 1ST_READ.BIN, type it correctly and remember it is case sensitive. Once you enter the binary, you will be prompted to "enter name of bootsector": this is IP.BIN, again type correctly and remember case.
  3. The next prompt will ask you to update the LBA value of the binaries. Enter zero ( 0 ) for this value, since we are removing the preceding audio session track and telling the binaries to start from the beginning of the disc. Once the utility is done, select the two bin files, then cut and paste them back into the 'discroot' folder; overwrite when prompted.
  4. Open the 'bootdreams' folder and start up the BootDreams.exe executable. Before doing anything click on the "Extras" entry in the menu bar, and hover over "Dummy file"; some options will pop out. If you are burning off the discs for any reason, be sure to use one of the options, 650MB or 700MB. If you aren't burning them, still consider using the dummy data. It will compress down to nothing if you're saving these disc images for archival reasons.
  5. Click on the far left icon on the top of BootDreams, the green DiscJuggler icon. Open or drag'n'drop the 'discroot' folder into the "selfboot folder" field, and add whatever label you want for the disc (limited to 8 characters, otherwise you'll get an error). Change disc format to 'data/data', then click on the process button.
  6. If you get a prompt asking to scramble the binary, say no. Retail games that run off of Katana or Windows CE binaries don't need to be scrambled; if this is a true homebrew application or game, then it might need to be scrambled.
  7. Choose an output location for the CDI image, and let the utilities go to work. If everything was set up properly you'll get a new disc image with cover art. I always boot the CDI up in RetroArch or another emulator to make sure it's valid and runs as expected so you don't waste time transferring a bad dump to your GDEMU (or burning a bad disc).
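If you prefer to script the file shuffling in steps 1 and 3, here is a rough PowerShell equivalent, run from the toolkit root with the folder names used in this guide:
# Step 1: copy the two binaries into the 'binhack32' folder
Copy-Item .\discroot\1ST_READ.BIN, .\discroot\IP.BIN .\binhack32\
# ...run binhack32.exe and answer its prompts as described above, then:
# Step 3: move the patched binaries back, overwriting the originals
Move-Item .\binhack32\1ST_READ.BIN, .\binhack32\IP.BIN .\discroot\ -Force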
If your game uses CDDA, the process involves a few more steps, but it's nothing terribly complicated:
  1. Check the 'discroot' folder for the IP.BIN file. If it's there, everything is good, continue on to the next step. If it's not there, look in the 'isofix' directory: there should be a file called "bootsector.bin" in that folder. Copy that file and paste it into the 'discroot' folder, then rename it IP.BIN (all caps, even the file extension). Now you're good, go on to the next step.
  2. Remember all those files dumped into the 'isofix' directory? Go look at them now. Copy/cut and paste all of those wave files from 'isofix' into the 'bootdreams/cdda' folder.
  3. Start up the bootdreams.exe executable from the 'bootdreams' folder.
  4. Select the middle icon at the top of the BootDreams window, the big red 'A' for Alcohol 120% image. Once you've selected this, click on 'Extras' up in the menu bar and make sure the 'Add CDDA tracks' option is selected (has a check mark next to it).
  5. Open/drag'n'drop the finished 'discroot' folder into the selfboot folder field; put whatever name you'd like for the disc in the CD label field. Click on the process button.
  6. If you get a prompt asking to scramble the binary, say no. Retail games that run off of Katana or Windows CE binaries don't need to be scrambled; if this is a true homebrew application or game, then it might need to be scrambled.
  7. A window showing you the audio files in the 'cdda' folder will pop up. Highlight all of them in the left pane and click the right-pointing arrow in the middle of the two fields to add them to the project. Make sure they are in order! Then click on OK. The audio files are converted to the appropriate raw format and the process continues. Choose an output location for the MDS/MDF files.
  8. When the files are finished, find them and mount them into a virtual drive (with Daemon Tools or whatever utility you prefer). Open up DiscJuggler and we'll make a CDI image.
  9. Start a new project in DiscJuggler (File > New, then choose 'Create disc images' from the menu). Choose your virtual drive with mounted image in the source field, and set your file output in the destination field. Click the Advanced tab above, and make sure 'Overburn disc' is selected. Click Start to begin converting into a CDI image.
  10. When DiscJuggler is done, close it down, unmount and delete the MDS/MDF files created by BootDreams, and test your CDI image with RetroArch or another emulator before transferring it to your GDEMU.
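For reference, steps 1 and 2 of the CDDA process as PowerShell commands, run from the toolkit root. This assumes the wave files CDIrip dumped carry a .wav extension:
# Step 1 fallback: copy the bootsector out of 'isofix' and rename it to IP.BIN
Copy-Item .\isofix\bootsector.bin .\discroot\IP.BIN
# Step 2: move the CDDA wave tracks into BootDreams' cdda folder
Move-Item .\isofix\*.wav .\bootdreams\cdda\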
If you have followed these steps and the disc image will absolutely not boot, then it's possible that a certain disc layout is required and must be used. I have only run into this a few times, but in this situation you simply need to use the 'audio/data' option for the CDI image in Bootdreams to put the image back together. Please note: if you are going to try to build the image with the 'audio/data' option, then make sure you replace the IP.BIN file in the 'discroot' folder with the original, unmodified bootsector.bin file in the 'isofix' folder. The leading audio track is a set size, and the IP.BIN will be expecting this; remember, the IP.BIN modified by binhack32 changes the LBA value of the file and it won't work properly with the audio/data method.
These methods have worked for me each and every time I've wanted to add artwork to a CDI image, and it should work for you as well. This will also keep the original IP.BIN files from the discs, so it should keep anything that references this information intact (like the cover art function in Redream). If it doesn't, then the rebuilt images with artwork can be used on your GDEMU and you can keep the original disc images to use in Redream or wherever.
Let me know if anything is unclear and I can clean the guide up a bit. Or if I can just share the link to my Drive with the images done and uploaded!
submitted by king_of_dirt to dreamcast

How to make Dishonored Shine on PC - Redux

This game came out 7 years ago (wow), and around the time of release there was a great thread on how to make the game look its best on PC. Sadly that thread is now buried, and two of the main contributors to it have deleted their accounts, posts, screenshots and guides. The Steam threads are also dead, as at some point Steam changed their forum URLs. Since I just redownloaded the game for my yearly playthrough, I thought I would attempt to remake the post with all the things I have learned.
I have made a small album of screenshots here; however I am not the greatest screenshotter ever, and I didn't have too much time to make these. These are 5K shots, but I had to save them as PNG and compress them to go online, so they aren't quite as nice as the ones on my HDD. I hope they show just how clean the edges are and how nice the SweetFX looks.
So to make Dishonored better we need to cover:
I. Config files
II. Anti Aliasing / Super sampling
III. Anisotropic Filtering
IV. Ambient Occlusion
V. Sweetfx / shaders
VI. Hud modification
I'll also add in some known issues and workarounds I've learned.
I. Config files
This is easy, as approximately a thousand years ago a user called Kakkoi made this modified ini file. It has better LOD, less pop in, and better shadows, etc. Delete everything from your current file and paste this in. Link
I'd recommend making this file read only as mine seemed to change itself back.
NOTE: this file is compatible with KoD and Brigmore Witches, however it does have a small issue. If you start either DLC the new hotbar icons (as in, the ones for powers and gadgets that were not in the base game - Stun Mine, etc.) on your shortcuts and radial wheel will be missing and replaced by a placeholder "img" icon. If you allow the config to be overwritten, this will fix itself however it also seems to undo the graphics improvements. The game is perfectly playable without the icons, and if you use no HUD and no radial menu, like I do, you won't even notice. Your call on whether you want the improvements or the icons.
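If you want to do the read-only step from PowerShell, here is a sketch. The engine ini file name is an assumption on my part, so check what the file in your Config folder is actually called; the path pattern follows the input ini mentioned in section VI:
# File name assumed; adjust to your install
$ini = "$env:USERPROFILE\Documents\My Games\Dishonored\DishonoredGame\Config\DishonoredEngine.ini"
Set-ItemProperty -Path $ini -Name IsReadOnly -Value $true   # tick the read-only attribute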
II. Anti Aliasing
The anti aliasing in Dishonored is trash let's be honest, but there are multiple ways we can improve this. The overall best way is supersampling, which is extremely easy these days.
NOTE: this guide uses Nvidia's DSR, which is on the Nvidia control panel and not available to AMD users. I understand AMD has a feature that provides almost the exact same result, however I've never used it. You will have to look this up yourself however the same rules should apply.
DSR is a user-friendly method of supersampling, which is when you force the game to render at a higher resolution and then downsample to your monitor resolution, providing much better edges. If you are on 1080p, 4x downsampling means you are running the game at 4K. If you are on 1440p and you downsample 4x, you are running the game at 5K, which looks glorious.
To activate DSR, right click on your desktop and hit Nvidia Control Panel, go to "Manage 3D settings" and find the setting for DSR.
Generally people agree that DSR is only worthwhile if you use the 4x factor, with the smoothness bar at zero. This is because 4x gives you perfect pixel scaling, and the smoothness slider applies a horrible FXAA-style blur to edges to try and clean them up. If you are worried about performance using 4x you can try a lower setting; however, Dishonored is old and not very demanding. I have seen video benchmarks of 100+ FPS at 4K with a 1050ti.
NOTE: if you are running downsampling but also want high refresh rates (over 60hz), see the bottom of the anti aliasing section. The game defaults and locks itself to 60Hz when you use DSR but there is a workaround.
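The maths behind the 4x recommendation is simple: 4x the pixels means exactly twice the width and twice the height, so every screen pixel is built from a clean 2x2 block of rendered pixels. A quick sketch of the arithmetic in PowerShell:
# 4x DSR renders at double width and double height, i.e. 4x the pixel count
$w, $h = 1920, 1080                        # native resolution (1080p example)
'DSR 4x: {0}x{1}' -f ($w * 2), ($h * 2)    # 3840x2160 (4K); from 1440p you'd get 5120x2880 (5K)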
This is not the only anti aliasing method that can be used: you can also force AA using your graphics driver, and if you have a decent PC, you can combine both SSAA and AA. It is glorious. Again, this guide covers Nvidia Inspector. There is an AMD alternative, but you'll have to find your own guide to use it. The info in here should mostly apply.
Download Nvidia Inspector
It's a portable application, so put it somewhere safe. Run it, open the profile settings, and search for Dishonored in the top bar.
At the top, where you see Anti Aliasing Compatibility, copy and paste this flag: 0x080100C5. It should look like this.
NOTE: If you have used Inspector before you may have seen other flags recommended, as there are a few that work with Dishonored. This one seems to be the least known but also the best; it has the best quality of AA and doesn't do that weird thing some of the other flags do where it leaves a one pixel tall strip of white along the top of the screen.
Now, scroll down to the section called Anti Aliasing. "Under Anti Aliasing - Mode" click the drop down and choose "Override Application setting".
NOTE: Although we have chosen "Override application setting", for this AA flag to work, you must choose the MLAA Option in the Dishonored Graphics options.
Where it says AA Transparency Multisampling make sure it is enabled. The next two options:
Antialiasing mode
Transparency AA mode
are up to you based on your required framerate and your specs. For me, I had zero problem running the game at 5k resolution, with 8x Multisampling and 8x SGSSAA. This is a little extreme however. 4x on both will be plenty. If you want to go lighter for more fps 2x on each will do. Test different options out and see what is best for you.
If your card allows the feature, make sure to enable the MFAA option too. I won't go into what it does, but it's a good thing. Once you have done all of this, hit apply.
For the lazy, confused or curious this is what my settings look like.
DSR + high refresh rate bug
Now, if you want to use DSR but also want high refresh rates, you will have to use this small workaround. When launching the game and setting up high resolutions (4K+), the game changes its refresh rate to 60, regardless of what your monitor supports. It does not cap the framerate however, meaning if you are getting 130 fps on a 60Hz game you will get severe screen tearing. I don't know if this is a DX9 thing, or if the game just assumes (as of its release, at least) that 4K panels could only manage 60Hz, or some other reason. Anyway, there is a fix. Once your DSR is enabled, before starting the game, open the Nvidia Control Panel (not Inspector) and go to the section Change Desktop Resolution. If you click the dropdown you should see your DSR resolution along with all the standard resolutions. If you select it, you will now be downsampling everything your computer runs. It may look a little weird, and the dialog box asking you to confirm your changes may be tiny, but once we are in game it will look fine. If you load the game with your screen already running in 4/5K, for some reason the game is perfectly happy to run at whatever refresh rate your monitor supports, which is good for us G-Sync users. It is a little annoying to do this every time you start the game, but for me it is worth it and it only takes a second. Just remember: when you choose your new resolution, hit yes to confirm your changes. Otherwise after 30 seconds it reverts to your native res.
TL;DR: for DSR + high refresh - downsample your desktop and then run the game at "native" res.
III. Anisotropic Filtering
Anisotropic Filtering is a little simpler: in Nvidia Inspector, go to the Anisotropic Filtering section and copy the following settings
IV. Ambient Occlusion
Ambient Occlusion is much the same. This one impacted my framerate a little, so YMMV. Copy the following settings, and if it impacts your performance, either change "Quality" to a lower setting or turn it off. It doesn't affect the visuals too much.
V. SweetFx and Shader (optional step)
SweetFx is... a problem. At the time of writing their site has been down for a few weeks(?), making choosing a preset difficult. The Reshade site is still up, however most Reshade presets are also hosted on the SweetFx site. There is hope however: using this archive on the Wayback Machine, you can visit the Dishonored page and download presets, though the screenshots and previews aren't working, so it's kind of a pot luck download. I personally use K-Putt'e's config, so I can vouch for that one being great, however they are all different and it's down to personal taste. Keep in mind this entire step is optional, so if you don't want to be digging through the Wayback Machine and downloading random configs you can't see screenshots of, I wouldn't blame you. Keep in mind also that you should always use the exact SweetFx version the preset was made for, which makes trying lots of different presets difficult (I don't think this is entirely true, but after a certain version old configs won't work with new SweetFx versions, so it's best just to be safe; the same goes for Reshade versions). You could always try making your own if you have the time and know-how, but that's not covered in this guide.
So, after all that, assuming you do want to use a sweetfx, this is how to do it:
Choose the preset you want. As I said, I really like K-Putt'e's preset: it removes the washed out feel and makes certain colours pop while also darkening the tone very slightly. It removes that weird green wash the whole game seems to have and darkens dark areas perfectly but keeps the game very playable (just grim enough).
Download the required SweetFx version for your preset. Open the archive and extract the files into Dishonored\Binaries\Win32 (it's the folder that has the actual Dishonored.exe in it.)
DELETE the following files: dxgi.fx, dxgi.dll, SweetFX_settings.txt
Paste in the config you have downloaded and rename it SweetFX_settings.txt, replacing the one that was already in there. Alternatively you can open your config, copy all the text, and paste it into the existing settings.txt. Either way.
When you start the game, if installed correctly, you should see a difference immediately when booting up. You can test this by turning it on and off; the default key to do so is Scroll Lock. I recommend going through various levels and areas and comparing on and off to see the difference. I can't play without this preset now; it's amazing.
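The delete/rename dance from the last two steps, as PowerShell commands run from Dishonored\Binaries\Win32. 'MyPreset.txt' is a stand-in for whatever config file you downloaded:
# Remove the stock files listed above
Remove-Item .\dxgi.fx, .\dxgi.dll, .\SweetFX_settings.txt
# Drop in your downloaded preset under the expected name
Rename-Item .\MyPreset.txt SweetFX_settings.txt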
VI. HUD
OK, last bit. As much as I love Dishonored, I hate the god damn HUD, with all its spiky bits and jaggy edges. We can fix a lot of this though. From the User Interface menu you can turn on or off pretty much anything in the game. Nice. However a few things will remain: the cogs in the bottom right when you save or autosave, the little hand symbol when you are able to pick up an object, and the black bars during cutscenes. We can set up a key to FULLY turn off the HUD, to get fully immersed.
Open:
Documents/My Games/Dishonored/DishonoredGame/Config/DishonoredInput.ini
A few lines down you will see dozens of entries that start with "BaseBindings=". Make a new line above these, with the following:
m_PCBindings=(Name="F6",Command="ShowHUD true")
We have now set up a hotkey that can completely turn the HUD on or off. It's set to F6, but you can make it anything you like. This is useful as, if you want, you can have the entire HUD turned on and only pull it up when you need it, or like me you can use it to permanently have all aspects of the HUD off. Good for screenshots too.
Note: There is one issue with this. Whenever you are going from one area to another, or Samuel is asking you if you are ready to leave a level, if the HUD is off the clickable options will not show. You can simply press F6 to bring them back, click Leave Level, and then switch the HUD off again.
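If you'd rather script the edit (and keep a backup), here is a hedged PowerShell sketch. It assumes the file contains BaseBindings lines as described above, and that the first of them is not the very first line of the file:
# Back up the input ini, then insert the hotkey line above the first BaseBindings entry
$ini = "$env:USERPROFILE\Documents\My Games\Dishonored\DishonoredGame\Config\DishonoredInput.ini"
Copy-Item $ini "$ini.bak"
$lines = Get-Content $ini
$i = ($lines | Select-String -Pattern '^BaseBindings=' | Select-Object -First 1).LineNumber
$patched = $lines[0..($i - 2)] + 'm_PCBindings=(Name="F6",Command="ShowHUD true")' + $lines[($i - 1)..($lines.Count - 1)]
Set-Content -Path $ini -Value $patched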
Whew. We're done. I hope you found at least a part of this useful; some of this might be common knowledge, but I wanted to get it all into one post. When you combine all of these tweaks I think Dishonored is twice as good looking. It is important to note that your mileage will definitely vary; however, I think even with a weaker GPU it is worth trying a combination of all of these, as Dishonored is a very easy game to run, so we might as well squeeze as much as we can out of it. With all these tweaks, 8x Multisampling and SGSSAA, downsampling from 5K, I get between 80-130 fps (130 is the fps cap; I don't recommend removing this cap as you can run into minor bugs such as leaning and then getting stuck leaning). For the record I have an overclocked 1080ti and an i7 8700k @ 5.0GHz.
If you know of any other graphics tweaks please let me know, maybe I can add them in here.
Credit to Kakkoi for the ini file. Credit to K-Putt'e for the SweetFX I mentioned.
Thanks for reading
submitted by pheromonekvlt to dishonored

An in-depth review of the "Ghost Mode" gameplay overhaul mod

As I'm sure you can all relate, the 10th Witcher Games Anniversary video brought a lot of feels. And with them came the itch to do yet another playthrough of my favourite video game. This time, to freshen up the experience, I decided to break from my tradition of only installing visual enhancement mods and look into the gameplay overhauls recommended on the sub.
To my surprise, in-depth assessments of these mods were nowhere to be found. True, you can look up detailed descriptions of what they change, but that won't give you an impression of how the changes work in practice, nor an objective look at how they impact the overall experience. Thus the goal of this thread is to help you decide if you would enjoy using "Ghost Mode" for your next playthrough and to serve as a resource for posterity.
Note: the title of this post is no misnomer. This is a long read. If you already have an idea of what the mod is about and are just wondering "if it's any good", then feel free to skip to the TLDR rating section at the bottom.
 

Setup

First things first: all the changes introduced by the mod remain true to the vanilla feel, flow and story of the game. There is no need to worry that the game you know and love will suddenly be unrecognisable, that you won't know your arse from your elbow. Secondly, I do not plan to rehash the full changelog in this review. Changes from Vanilla will only be mentioned if they are relevant to the point I am making.
Disclaimer: this review is written with the above in mind. I do not claim my experience to be completely exhaustive. For example, things which were difficult or annoying for my setup might be trivial for others and vice versa. Your mileage may vary.
 

General Gameplay

The mod has been implemented in a competent way. I did not notice any performance decrease compared to Vanilla and encountered no game breaking bugs. There was only a single major issue in 2.6 which was repeatable and highly annoying, but thankfully it seems to be fully fixed with version 2.7.
Immersion has been improved and the game world is more believable.

Quests and Experience

The way the experience penalty works has also been changed. Previously you would get 100% of quest experience if you were at most 5 levels above the quest level, and basically 0% if you were 6 levels above or more. Now for every level you are above a quest the experience reward is reduced by 16%. This also works the other way around, you will receive an experience bonus for doing quests which are higher level than you.
This way you get the best of both worlds. You get to tailor the quest order to your liking, without having to suffer meta-gaming pressure, and at the same time Geralt will not end up overlevelled.
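As a hedged sketch of the arithmetic described above (I'm assuming the scaling is linear; the mod's exact internals may differ):
# 16% per level of difference, in both directions
$geraltLevel = 23; $questLevel = 20
$xpMultiplier = 1 - 0.16 * ($geraltLevel - $questLevel)   # 3 levels above the quest -> 0.52x reward
# a quest 2 levels above Geralt would instead give 1 - 0.16 * (-2) = 1.32x, i.e. a 32% bonus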
 

Combat

This is usually the number one reason why people recommend this mod and it is clear to see why. The author has implemented a great number of improvements to nearly all of the vanilla systems. Combat is more challenging and rewards players for their skill and preparation better. Geralt's overpowered traits and abilities have been toned down and your specialisation makes a much bigger difference to how you approach fights.
Overall, most battles are more fun with GM compared to vanilla. However this comes at a cost: namely the "realism", feel and flow of combat have all decreased to facilitate the above. Let's examine the 4 main areas where GM changes combat and evaluate them in detail.

Enemy behaviour

The first thing you will probably notice is that "all enemies have a reduced reaction time". The reason I put quotes around that phrase is because I don't know the actual inner workings of the mod and precisely how it has modified the AI scripts. Therefore I am just calling the effect as I saw and experienced it during my playthrough. The easiest way to describe it is: the time frame between you being in range of an enemy and the enemy starting their action is now much lower.
The primary effect of this change is an increase in difficulty. You now have to have faster reflexes in order to be able to dodge enemy attacks. Additionally, enemies will spend significantly less time in a hit recovery state after you land a blow. Which means that you won't be able to chain as many attacks as you could before, since your enemy will dodge/retaliate much more rapidly.
This change really shines when it comes to boss fights. The faster enemy reaction time forces you to play by the boss' rules and pay attention to their mechanics, rather than treating them as a higher health & damage generic enemy. To give a concrete example, let us look at the Olgierd fight at the burning manor.
In Vanilla you can easily beat him on Death March by ignoring the fight's mechanics. You simply position yourself slightly outside of his melee range and start a rend which he walks into. Then you follow this up with a quick dodge to the side to avoid the sand in the eyes and immediately start another rend. The boss gets locked in the above AI loop and you win pretty easily. The reduced reaction time in Ghost Mode counters this perfectly. By the time you are winding up your rend the boss, instead of walking into your sword, starts his own attack which targets where you will be after you swing and hits you before you can deal any damage.
So to beat him I had to actually play by the rules, which means conventional sword swinging is out of the question, especially as you also leave yourself open to a quick counter attack which kills you in 2-3 hits. The rules in this case are: counter his attack, swing once and go on the defensive. There are three different attacks he throws at you:
  • The red charge: when you are far away from him, it is the easiest to counter and the bare minimum required to win. If you can only counter this then you will win, but it will take ages.
  • The phase charge: is when he turns semi-transparent and steps side to side. He only does this if you are slightly outside of melee range, so you have much less margin of error on your counter. If you are quick enough you can counter this type of attack with a close to 100% success rate, which means that a better player can defeat him much more rapidly.
  • Finally we have the slash combo, which he does when you are in melee range. This one is also counterable, but the reaction time is so small I didn't feel it was worth the risk. Especially because, if you fail it and only parry, you will be locked in that stance for a few of his hits, which will drain your stamina significantly (and you cannot counter without stamina, but more on this topic later).
So as you can see from the above GM makes you pay attention to the intended mechanics and rewards skilled play.
The change to reaction time also has its downsides, however, and they are major ones. Most notably, enemies which have extremely fast attack animations by default become unfair in melee combat, especially if they are in a group. The best examples of this problem are all of the insectoid type enemies like the endregas and the kikimores. Their attack animation is fast, and when you pair it with increased aggression and run speed, you literally cannot attack them preemptively. If you start any type of attack (without dodging one of their attacks or parrying first) they will strike you first, even if you were outside of their melee range when you initiated your swing. As you can probably tell, fighting groups of these enemies is extremely annoying, especially early on. Later you can cheese them by unloading your entire reserve of Dancing Stars & Northern Wind bombs for some semblance of crowd control, but even that is like putting a plaster on an amputated leg. What's strange is that, looking at past feedback, numerous people have complained about these enemies throughout the mod's life cycle. Yet the author has failed to address the problem, which is that these enemies shouldn't have reduced reaction time in the first place. Such empty difficulty, existing only for its own sake, is never good.
Another downside is that early on you cannot take on groups of certain enemies, like wraiths, nekkers or insectoids for example, without resorting to AI abuse. This probably only applies to the higher difficulties, but when the best way of beating groups in the early game is dragging enemies one by one to the edge of their AI leash it doesn't feel good. No matter how skilled you are in melee combat you cannot defeat such packs head on without numerous deaths, which doesn't make you feel like a witcher at all in those encounters.
Finally, GM also implements monster "dodge" with a much more heavy handed approach compared to Vanilla. All sorts of enemies will now dodge your attacks more frequently. This is yet another example of where combat quality was sacrificed in order to increase combat difficulty. I write "dodge" in quotation marks because normally the word implies that the enemy sees your attack and reacts to it by getting out of the way. This mod makes the enemies which "dodge" the most feel like blatant AI bots with rigid if-then logic in their script, which harms immersion. Some examples:
  • Enemies dodging mid attack, when it makes no sense for them to do so
  • Werewolves dodging while airborne in the middle of their lunge
  • Humans dodging attacks that come from behind them and they cannot see
  • Shrieker glitching into its "on the ground" dodge animation while flying, after being shot with a crossbow
  • Occasionally enemies dodging attacks while burning, sirens dodging when knocked down etc.

Skill Balance changes

A lot of adjustments have been made to the skill tree in order to improve how balanced Geralt is in combat. The changes can mostly be summed up by saying "baseline Geralt was nerfed". What that means in practice is that witchering aspects you do not invest points into will be significantly worse compared to vanilla. For example the signs, crossbow and damage bombs are a lot less useful for my mainly sword focused build. This is a good thing as specialisation encourages more diversity in your playstyle. Here are some examples:
  • Quen no longer always blocks at least 1 attack, regardless of how much damage it's supposed to absorb. Now it's no longer the combat crutch it used to be in Vanilla as it will only absorb the value of the shield and the rest of the damage will go through.
  • Poison and bleed effects are no longer extremely overpowered boss monster killers. Their duration and damage are significantly reduced to the point where 1 poison application is equal to about 2 additional sword attacks. Still good, but now balanced.
  • Crossbow & Bombs now only deal half damage if they were auto aimed. And of course manual aiming during combat is way too slow unless you have invested into the related skills. There seem to be a few minor bugs related to these items. For example manual crossbow shots sometimes don't bring big flyers down despite hitting them successfully. Superior Samum, manually aimed, dealing 5 (yes five) damage on kikimores.
  • In general overpowered skills have been nerfed (rend, whirl, euphoria etc.) while underpowered abilities have been buffed (crippling strikes, undying, counter attack etc.).
Overall the skill tree feels significantly more polished and we now have a lot more viable choices to pick from.

Defensive techniques (dodge, roll, counter, parry)

The way dodging and rolling worked in Vanilla was a simple binary check: did you press the appropriate button before the attack connected with your character? If yes, you avoid all damage, regardless of where your character ended up going (for attacks which can be dodged). And while this was still a big improvement over the second game, the i-frames were way too generous and the moves lacked any stamina cost, which made it all too easy to just spam the dodge button and be invulnerable. GM changes this behaviour by also taking into account the direction Geralt moves in when dodging/rolling with respect to the enemy attack. Now if you dodge in time but still end up connecting with the attack, you will take partial damage and debuffs based on what direction you were going in.
Parrying and countering have been significantly enhanced compared to the base game. Essentially you can now parry/counter nearly all attacks, those coming from monsters included. Taking counters as an example, you may counter light attacks just like before, by reducing all incoming damage, but now you retaliate against monsters with a "counter slash". This also applies to heavy attacks (including hammer and spear wielding humans), except that damage is reduced only by 50%. Both parry and counter now have a stamina cost depending on the attack you have deflected. This is a great addition to the game in my opinion, as it plays perfectly with the risk and reward scale. Countering carries a greater reward because you spend your time negating the monster attack and dealing damage of your own, instead of just negating as you would with a dodge. However the risk is also greater, because confusing monster light and heavy attacks means you will take significant damage, especially if your build is not prepared for it. Yet another gameplay element where skill is rewarded.

Armour, stamina and different playstyles

Stamina management is now a big part of combat, rather than a mere afterthought with Tawny oil. The base regeneration rate is significantly reduced, all combat actions pause this regeneration for a short while and counter and parry stamina costs are increased. The armour you are wearing now also affects your stamina more than the Vanilla regeneration penalties. Light armour has no penalties and increases stamina regen, medium armour introduces a stamina cost for rolling & sprinting and heavy armour has stamina costs associated with rolling, dodging and sprinting.
Armour now plays a much bigger role in the game thanks to its significantly increased damage absorption capabilities. Plenty of enemies now have high armour values which also makes the armour penetration stat on swords better. To help with this, your heavy attacks now have a significant amount of armour penetration by default. This means that quick attack spam is no longer maximum dps against all enemy types and you will have to mix in heavy attacks much more frequently. Some enemies like golems are so heavily armoured that using quick attacks against them is basically pointless. Similarly, high armour values on your gear now make a big dent in the incoming damage whereas in Vanilla they were useless and the only thing that mattered were the resistances on the gear.
Both of these changes together translate into very distinct melee combat playstyles depending on which Witcher set you are wearing, which is one of the best features of GM for me.
  • Light Armour: the Cat set provides the combat experience which is closest to Vanilla DM, with a few important tweaks. Firstly, because you have very little damage reduction, Quen is practically useless. It won't even fully absorb a light attack from a drowner. This combined with the change to the defensive techniques means that you actually have to be quick on your feet and good at dodging, you can only rely on your own skill. Secondly you can also mix in counters for increased dps once you are familiar with the attack patterns of the enemies. However you still have to dodge heavy attacks due to your lack of defence. This makes the Feline armour playstyle a skillful dance combining counters & dodging which is extremely fun, especially against bosses and small enemy groups.
  • Medium Armour: the Wolf set is a bit of a jack of all trades, master of none. It has less damage compared to the Cat but more defensive stats and armour. This essentially means that your playstyle is similar to the Cat but you reduce some of the risk and settle for a lesser reward. You still can't afford to counter heavy attacks, but at the same time the stamina penalties for sprinting and rolling are mostly irrelevant as the latter is only necessary to get out of the way of enemy AoE attacks. As a result you will be safer against large groups compared to the cat but will have to settle for reduced offensive capabilities.
  • Heavy Armour: the Bear set in GM presents a markedly different combat experience compared to vanilla. The quickest way to describe it is as an "immovable object". The stamina cost for dodging means that you will spend all of your time holding your ground and countering ALL enemy attacks (apart from AoE). The high armour value and damage resists mean that you can shrug off heavy attacks with ease. Combine this with talents that use adrenaline to heal you and an Ekkimara decoction to create a true tank build. However, due to the slow stamina regeneration signs are pretty much out of the question because every sign costs 10+ counter attacks leading to a big dps loss. This playstyle is extremely fun against groups of enemies because it allows you to combine defense with offense and simultaneously negate enemy damage. It also has its weaknesses - namely big enemies and bosses who make heavy use of area effect attacks, such as Griffins and Imlerith for example. Overall I didn't spend much time testing this playstyle in my run, but I found it very satisfying and fun. Definitely keen on using it for a complete playthrough in the future.
 
Another highlight of the GM combat enhancements is the 1v1 fist fights (seriously). They are much more challenging, fun and skill intensive due to the reworked stamina system. In Vanilla these were pretty formulaic: keep your distance from the opponent so that they only lunge with a heavy attack, which is easier to counter compared to the fast jabs; counter it, throw a one-two, and then rinse and repeat. In Ghost Mode you no longer have the stamina to consecutively counter all attacks and must spend some time in between counters to recover, which introduces a great deal of tension and makes the fights more skillful. Remember, dodging pauses your stamina regeneration, so you don't have an easy way around this, especially as many arenas are quite small, which makes this process challenging. Furthermore, blocking jabs costs significantly less stamina, so if you're confident in countering the opponent's fast attacks you have a great opportunity to display your skill. In addition, group fist fights are a lot easier compared to Vanilla, because the opponents aren't health sponges. This is another great change in my book, as those were pretty tedious and the fist fight system doesn't really work great for group combat.
 
Finally, to finish off this section, I would like to spend some time looking at enemy balance in the Blood & Wine expansion. There were several problems with it in my opinion, which overall decrease the quality of the experience.
  • Giant centipedes deal too much damage. Yes they are generally easy to avoid, however them one shotting a character in master crafted Feline Gear + Quen + Superior Insect Oil + Protective Coating + 600 hp green mutagen at full life seems excessive. I'd suggest a 30% damage nerf. For comparison, level appropriate Giant Centipedes hit harder than red skull cyclopses and werewolves.
  • High concentration of monsters which work badly with the reduced reaction times due to their instant attacks.
  • Arachnomorph damage seems to be balanced against them hitting you once when most of the time they double tap you, which enables 1 small spider to pretty much instantly kill you from full life if you make a mistake. Damage should be reduced by at least 40%.
  • The two Guardian Panthers in the Professor Moreau quest are extremely overtuned for when you face them and, as a consequence, require extremely cheesy strategies to beat.
  • Alps are probably the hardest enemies in the whole game. Thankfully you only have to fight them twice. The first one's alone and she's manageable, but the second involves you getting tag-teamed by a Bruxa as well and that one is quite painful. It's a good thing Dettlaff can mind control other "lesser" vampires, because otherwise one of those ginger vamps would easily wipe the floor with both him & Regis at the same time.
 

Items and crafting

  • Witcher set bonuses now scale with the number of pieces equipped rather than being binary. Bonuses also apply from the lowest set tier and not just Grandmaster level. This is a good change in my book as they diversify your combat style from an earlier stage of the game. Set swords are no longer the best weapons for their level requirement, so exploring the world and doing contracts for relics feels much more rewarding.
  • The weapon & armour upgrade kits, sold by master craftsmen, are a great addition to the game. They allow you to increase the base damage/armour of your equipment by increasing its level requirement by 1 (i.e. the Aerondight effect). This enables you to make use of those special relic swords like: Hjalmar's Steel Sword, Pang of Conscience, Blade of the Bits, Winter's Blade etc. from the moment you obtain them to as long as you wish. This means that you must only pick a weapon based on if its secondary stats have synergy with your build, and this opens up a lot of choices and min-maxing.
  • Speaking of special relic swords, these now have significantly improved secondary stats which makes them stand out from the generic random relics. Depending on your build you will probably end up using one of these for most of your playthrough. It feels great to get a "special" sword reward for a quest which is actually useful and not vendor fodder like in Vanilla.
  • Equipment crafting now requires significantly less materials, so you are no longer forced to dismantle an entire army's worth of arsenal to craft something. Unfortunately the craftsmen will now rip you off much harder, comparatively to Vanilla, with their fees. So if you want to unlock all the levels of the Runewright and deck out Corvo Bianco in the various Witcher sets you will still have to pick up and vendor massive amounts of loot.
  • Crafting costs of random weapons in the early game, before you can access sets and contract relics, are prohibitively expensive.
  • White Gull isn't so difficult to produce anymore as it doesn't require Redanian Herbal and you can craft the Mandrake Cordial yourself, white honey now comes with more charges - both are nice QoL changes.
  • Potions and bombs require significantly less ingredients, so theoretically you would need to spend less time picking flowers. However considering that you could buy most of these cheaply from herbalists in the vanilla game (and still can) this change is more or less irrelevant in practice.
Cooking recipes are a good addition to the immersion in my experience. A witcher on the path should be able to cook himself a meal while squatting in some untamed wilderness. Unfortunately, in practice I did not use these recipes at all after leaving White Orchard. There are a few problems with the current implementation:
  • Food & drink healing is not balanced according to the amount of ingredients required to produce. For example, right at the start of the game you can learn how to make apple juice which is in the top tier of drink healing and costs next to nothing to make, in contrast with other much more expensive drink recipes which very often heal for less. Food recipes require way too many ingredients (the vast majority of which must be bought) and offer sub par healing in comparison.
  • Human enemies in Velen and onwards drop way too much food, often between 2-3 pieces each. Why should I waste money buying ingredients and cooking when I could obtain something nearly as good for free?
  • Cooking recipes are too expensive for what they offer. They could use a 50% coin cost reduction across the board. Food recipes should require less ingredients. There should be more distinct healing "tiers" for different food & drink, less total recipes, and bandits should drop less grub to incentivise people to interact with the system.
 

Nitpicking

  • Enemies focusing more on NPCs during combat (if present) makes certain escort quests significantly more annoying on Death March: namely the Black Pearl and the Skellige mine clearing duo. Those NPCs could use a buff to their survivability.
  • All wolves/dogs & boars are significantly weaker compared to the vanilla game. Probably a design decision, but it feels out of place since all other enemies are harder. Wolves in the Land of a Thousand Fables do have level appropriate stats unlike all their siblings for some reason.
  • Kinks in the extra books/notes feature: fist fight quests keep giving you the same note after every brawl, and many texts are given out at weird times. For example, right at the beginning of some action sequence.
  • Early game bosses and contract monsters (level req < 15) could use a modest health reduction to prevent boredom. Later on the only enemy that felt too "health spongy" was Iris' nightmare. Those Olgierds could use a health reduction because at the moment the fight is quite repetitive, lacks the atmosphere of the burning manor fight and so becomes a bit tedious.
  • The base Yrden duration is too short and makes fighting Wraith bosses extremely tedious early on, until you get Enhanced or preferably Superior Moon Dust.
  • Superior Cursed Oil now requires berserker skin, which is not obtainable in Skellige if you investigate the massacre with Cerys. Previously there was a bug where berserkers spawned near Kaer Morhen, but this seems to be fixed in the newest version. The only place I found berserker skin in the whole game was in the Borsodi vault (?), dropped by one of his guardsmen (??). Either put a copy of the ingredient somewhere in the Vildkaarls' village, or change it to some other more lore appropriate place. The current location makes no sense.
  • The inventory weight system is at best a sidegrade to Vanilla. Yes, it is unrealistic that Geralt is able to hold all these weightless ingredients in Roach's saddlebags, so this mod gives them weight and forces you to regularly deposit all your ingredients in the stash. Then, to access them more conveniently, every time you are at an appropriate vendor (alchemist/blacksmith/armourer) Geralt is able to telepathically access said stash to obtain the ingredients. To me it seems like one unrealistic element was simply replaced with a different, equally unrealistic one, so what's the point?
    • In all fairness you can reduce the weight of all items from the mod options, but that slider leads to even more immersion problems. Because if you wish to compensate for the weight on all the ingredients you have to turn up the slider so much that all the swords and armour now weigh practically nothing as well. A better solution would be keeping the weight slider and adding a check box for "Zero ingredient weight", or just using the vanilla weight system because the current implementation isn't a clear improvement.
  • I find the name of the mod to be a bit unfortunate, since it has nothing to do with any of the content. Makes you wonder if it's one of the reasons why it is not more popular.
  • Grapeshot seems to deal insignificant damage to higher level enemies. Superior version of it hits arachas for 5 damage with an aimed shot for example. Even without bomb talents it shouldn't be this weak.
  • Aerondight has lost a great deal of its unique flavour (all items can now be upgraded) and the nerf to its secondary stats was too great. Before it would give 10% attack power per stack, up to 10 stacks, now this has been reduced to 5% crit damage. For comparison, random relic swords can spawn with 60%+ critical damage and have 4 other secondary stats as well. Not to mention free sockets, which cost ~8000 gold for Aerondight. Finally, while the bonus at maximum stacks is still great it's now harder to maintain due to the decreased enemy reaction time, is basically non-existent against all the instant attack foes (and for heavy armour builds) and has overlap with several consumables (thunderbolt potion & oils now give crit chance) and talents which reduces its effectiveness even further. Overall the sword feels underwhelming and not worth using.
  • Olgierd's sabre, Iris, no longer gains charges when enemies block your attacks and doesn't buff the damage of the fast attacks. To compensate it now deals 10% of target's maximum life in addition to the other bonus damage when charged. I was very excited to use this sword with the new item upgrade kits and was left moderately disappointed. The life loss penalty is still too big and basically forces you into using Katakan decoction which doesn't feel great. Furthermore, to charge the sword you must deliver 3 successful fast attacks in succession. Against armoured enemies this feels horrible as you're effectively whacking them with a wet noodle until you can charge the finisher. In addition, humans are much more likely to dodge your attacks compared to before causing you to often whiff on the charged strong attack while still paying the health cost. Overall the sword is still worth using and feels satisfying with the Severance runeword, however I would like to see some quality of life change: for example halving the health penalty.
  • This mod breaks the following achievements: equipping a full witcher set (Armed and Dangerous), equipping all the grandmaster set pieces (Dressed to Kill), equipping Aerondight (Embodiment of the Five Virtues). Tested on GoG. Probably irrelevant for 99% of people, but worth mentioning.
  • The Undvik set has less armour than the basic Feline set, despite having a higher level requirement and being heavy armour.
  • Superior Full Moon heal, based on current toxicity, either does not work or heals a minuscule amount.
  • Kill count bestiary section feels a bit too arcade-y and gimmicky for my tastes. Would prefer it hidden at the bottom of the list and collapsed by default or, better yet, an optional toggle in the mod options if possible.
 

Scoring (TLDR)

I will now attempt to rate this mod based on an arbitrary scale I just made up. A score of 5/10 means that overall the mod neither improves nor deteriorates the experience when compared to the original game. A higher score than that is good, lower is bad.
  • -1 for the fast reaction times on enemies with instantaneous attack animations (and the fact that this hasn't been fixed for so long) and the balancing issues of Blood & Wine.
  • -0.5 for the overall lowered quality of the combat experience: namely its feel, flow & realism.
  • -0.5 for all the points listed in the Nitpick section.
  • -0.5 for the experience penalty system which promotes meta-gaming and for the subpar support of the NG+ mode.
Overall: 7.5/10. Despite the occasional hiccups I thoroughly enjoyed my playthrough with Ghost Mode. I found the mod to be an overall improvement to the base game and definitely recommend it.
 

Never Asked Questions

Q: What difficulty should I play on?
A:
  • You are looking for a similar challenge to vanilla Death March or early game B&BB, to see if you like the other gameplay changes? Story & Sword. If you don't care about the combat then I would suggest that you also reduce monster damage from the mod options.
  • You played on Death March from level 1 and found it too easy? Blood and Broken Bones.
  • You played on Death March from level 1 with self-imposed limitations such as: no Quen, not using set swords, deliberately skipping some of the best talents and found it too easy? Death March.
 
Q: What build did you use?
A: Combat/Alchemy - GM Death March
I went for delusion & poisoned blades first. Muscle memory & strength training second, then back to alchemy for protective coating, afterwards filled out the combat tree. Undying was only equipped once the first B&W skill slot was unlocked and I could move an alchemy skill there, on lower difficulty levels I would replace it with Razor Focus. Delusion is optional. I pick it mostly for RP reasons although the extra stamina regen is nice, especially early on. If you don't want to use it then replace it with the Synergy skill from the alchemy tree.
 
Q: Any other interesting stats/tidbits from your run?
A:
  • Hardest 1v1 fight: werewolf outside of the Whispering Hillock, ~10 deaths.
  • Other boss fights, with number of deaths in parentheses: WO Griffin (1), Imlerith (2), Toad Prince (0), Olgierd (3), Caretaker (1), the Olgierds, i.e. the Ethereals (2), Caranthir (0), Eredin (1), Dettlaff (0).
  • Hardest group fight: the arachas cave south-west of Harviken on Faroe, 8 deaths.
  • Found the "Tor Zirael" sword for the first time ever in 4 playthroughs; not sure if I finally got lucky or if the mod increased its spawn chance. Unfortunately, stats-wise it's still rubbish.
submitted by Paskoff to witcher
